In the last couple of years we have seen the size of image sensors in high-end smartphones increase dramatically. At the same time, pixel counts have skyrocketed, driven at least in part by the use of pixel-binning technology to capture images with lower noise levels and a wider dynamic range than would be possible with conventional sensor technology.

The sensor in the main camera of Samsung’s latest flagship smartphone, the Galaxy S20 Ultra, is a prime example of both these trends. At 1/1.33″ it’s currently one of the largest on the market (only the 1/1.28″ chip in the Huawei P40 Pro is bigger), and its whopping 108MP resolution allows for pixel binning and all sorts of computational imaging wizardry to produce high-quality 12MP default output.

In terms of pixel binning, this latest Samsung sensor takes things one step further than previous generations. Instead of four, it combines nine pixels into one for an effective pixel size of 2.4µm.
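The idea behind 3×3 binning can be sketched in a few lines of code. The following is a minimal illustration, not Samsung’s actual readout pipeline: the function name and the choice of averaging (rather than charge summing on the sensor itself) are assumptions made purely for demonstration. It shows how each 3×3 block of photosites collapses into one output pixel, cutting the pixel count by a factor of nine (e.g. 108MP → 12MP) while tripling the effective pixel pitch (e.g. 0.8µm → 2.4µm).

```python
import numpy as np

def bin_pixels(raw: np.ndarray, factor: int = 3) -> np.ndarray:
    """Collapse each `factor` x `factor` block of a raw readout into
    one pixel by averaging (illustrative stand-in for on-sensor binning)."""
    h, w = raw.shape
    assert h % factor == 0 and w % factor == 0, "dimensions must divide evenly"
    # Reshape so each 3x3 block sits on its own axes, then average them away.
    return raw.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Toy 6x6 "sensor": binning 3x3 yields a 2x2 output with 1/9 the pixels.
raw = np.arange(36, dtype=float).reshape(6, 6)
binned = bin_pixels(raw)
print(binned.shape)  # (2, 2)
```

Because each output value pools light gathered over nine photosites, the binned image trades resolution for the lower noise and wider dynamic range mentioned above.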

Now we’ve learned that the South Korean company has no intention of stopping there. In a blog post on the company website, Samsung’s Head of Sensor Business Team, Yongin Park, explains that it is the company’s goal to design and produce image sensors that go beyond the resolution of the human eye, which is said to be around 500MP.

However, Park is aware that numerous challenges have to be overcome to achieve this goal.

‘In order to fit millions of pixels in today’s smartphones that feature other cutting-edge specs like high screen-to-body ratios and slim designs, pixels inevitably have to shrink so that sensors can be as compact as possible.

On the flip side, smaller pixels can result in fuzzy or dull pictures, due to the smaller area that each pixel receives light information from. The impasse between the number of pixels a sensor has and pixels’ sizes has become a balancing act that requires solid technological prowess,’ he writes.

Launched in 2013, Samsung’s ISOCELL technology has been paramount in allowing more and more pixels to be implemented on smartphone image sensors, by isolating pixels from each other and thus reducing light spill and reflections between them. At first this was done using metal ‘barriers’; later generations used an unspecified ‘innovative material’.

Tetracell technology came along in 2017 and used 2×2 pixel binning to increase the effective pixel size. It was superseded by the company’s Nonacell tech and its 3×3 pixel arrays earlier this year. At the same time, Samsung engineers were also able to reduce pixel size to a minuscule 0.7μm, something that, according to Park, was previously believed to be impossible.

So, what can we expect from Samsung’s sensor division in the medium and long term? Park says that the company is ‘aiming for 600MP for all’ but doesn’t provide much detail on how this could be achieved. These sensors would not necessarily be exclusive to smartphones, however, and could be implemented in a wide range of applications.

‘To date, the major applications for image sensors have been in the smartphones field, but this is expected to expand soon into other rapidly-emerging fields such as autonomous vehicles, IoT and drones,’ he explains.

In addition, the company is looking at applications for its sensors that go beyond photography and videography. According to Park, sensors that are capable of detecting wavelengths outside the range of the human eye are still rare, but could prove beneficial in areas such as cancer diagnosis in medicine or quality control in agriculture.
