Image Sensors and Digital Photography: Digital Imaging Strategies: Converting Photons Into Images
Photons, Electrons, and Pictures
As I discussed in my previous article on digital imaging sensors, the photoelectric effect generates electrons whose energy depends on the frequency of the photons that struck the detector. If you want to know what “color” those photons were, you have a few options: check the electrons individually, a rather daunting task; build the detector so that it responds only to a single energy level or a narrow range of levels (which has only recently become practical using quantum electronics); put a filter over each individual photodetector to control which color gets through to it; or split the light into its color components and channel each to a separate photodetector.
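The connection between color and energy comes straight from the photoelectric effect: a photon's energy is Planck's constant times its frequency, or equivalently hc divided by its wavelength. A minimal sketch in Python (the function name and unit choices are mine, purely for illustration):

```python
# A photon's energy determines the "color" a detector can distinguish:
# E = h * c / wavelength.
PLANCK = 6.62607015e-34     # Planck's constant, J·s
LIGHT_SPEED = 2.99792458e8  # speed of light, m/s
EV = 1.602176634e-19        # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, in electron-volts."""
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(650))  # red,  ≈ 1.91 eV
print(photon_energy_ev(450))  # blue, ≈ 2.76 eV
```

A blue photon carries noticeably more energy than a red one, which is why an energy-selective detector can, in principle, report color directly.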
The most common approach, the Bayer mask, relies on filtration and clusters color pixels in groups of one red, two green, and one blue. (The greater number of green detectors reflects the dominance of mid-wavelength visible light in our visual environment; the human eye is weighted the same way.) A byproduct of this configuration, though, is that your digital camera’s megapixel rating is deceptive: it refers only to the total number of photodetectors, and not all detectors see all colors. At best, half of them (the green ones) see any particular color; red and blue each get only a quarter. The camera relies on image processing to “figure out” the missing colors at each pixel based on the surrounding pixels.
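The sampling and fill-in process above can be sketched in a few lines of Python. This is a minimal illustration, assuming an RGGB tile layout and simple bilinear averaging; the function names are mine, not from any real camera pipeline:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB Bayer mask: each photosite
    keeps only the one color channel its filter lets through."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites (1 per 2x2 block)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (2 per block)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites (1 per block)
    return mosaic

def green_at_red(mosaic, y, x):
    """Estimate the missing green value at a red site by averaging its
    four green neighbors: the simplest form of demosaicing."""
    return (mosaic[y - 1, x] + mosaic[y + 1, x] +
            mosaic[y, x - 1] + mosaic[y, x + 1]) / 4.0
```

On a uniformly gray test image the interpolated green matches the true value exactly; on real images with edges and fine detail, this averaging is where demosaicing artifacts such as color fringing come from.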
Professional digital video cameras and other imaging devices that require precise color data split the light into separate channels, and quantum imaging is being heavily researched for use in high-resolution infrared detectors; neither approach, however, is found in devices readily available to the consumer.
OmniVision recently introduced a new strategy for CMOS sensors that promises to significantly improve performance. One disadvantage of the standard lithography process, in which the wiring circuitry is deposited on top of the photosensitive silicon, is that those circuits can scatter some of the inbound light and reduce the sensor’s sensitivity. OmniVision flipped this arrangement upside down and put the photosensitive silicon on top, giving it an unobstructed view of the light. Called backside illumination, or BSI, this technique promises to improve high-ISO performance and allow smaller CMOS sensors.
Another technique, pioneered by Foveon and adopted in Sigma’s cameras, is the Foveon X3 sensor. Its approach mimics the layered structure of color film, placing the red, green, and blue photodetectors on top of one another and exploiting the fact that silicon absorbs shorter wavelengths closer to the surface. This yields true color information for every pixel, making color more vivid and realistic (especially at the red end of the visible spectrum), at the cost of sensitivity: light destined for the deeper layers must first pass through the silicon above them. In addition, the scattering effects mentioned above may contribute to the reported difficulties at higher ISO settings that have plagued cameras in the series.
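The depth-based separation can be sketched with a toy Beer-Lambert absorption model. The absorption lengths and layer depths below are illustrative assumptions of mine, not Foveon's actual figures; the point is only that each color deposits a distinct fraction of its light in each layer, a signature the camera's processor can then unmix into RGB:

```python
import math

# Illustrative (assumed, not measured) absorption lengths in silicon,
# in microns: shorter wavelengths are absorbed nearer the surface.
ABSORPTION_LENGTH_UM = {"blue": 0.4, "green": 1.5, "red": 3.5}

# Hypothetical depth boundaries (microns) for a three-layer stack.
LAYERS = [(0.0, 0.2), (0.2, 0.6), (0.6, 3.0)]

def layer_fractions(color):
    """Fraction of incoming light of a given color absorbed in each layer,
    using Beer-Lambert attenuation: transmitted = exp(-depth / length)."""
    length = ABSORPTION_LENGTH_UM[color]
    return [math.exp(-top / length) - math.exp(-bottom / length)
            for top, bottom in LAYERS]
```

Blue light deposits the largest share of its energy in the top layer, red the smallest, so the three layer readings together identify the color, but only after a noise-amplifying unmixing step, which is one plausible contributor to the high-ISO struggles mentioned above.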
Despite these occasional innovations, a total upset of the status quo seems very unlikely at this point. Of course, even a compromise can be very good with enough technology behind it, and Nikon and Canon, with the release of 12-, 15-, and 21-megapixel models, have shown that the current state of the art renders the Bayer approximation very good indeed.
This post is part of the series: Image Sensors and Digital Photography
Image sensors make everything possible in the world of digital photography. Find out about the basic technology behind the camera and some of the new strategies that could change the field.