Detectable Light Intensity in a Camera Capture
A camera is similar to our eyes: both convert streams of photons into images that carry information. The imaging sensor in the camera contains a grid of pixels. Each pixel counts the number of photons that hit it during a reading phase, known as the exposure, and outputs an intensity value.
The camera’s dynamic range defines the ratio between the minimum and maximum measurable light intensity from black to white and is an essential property in photography.
For a single image capture, an imaging sensor can measure a limited range of light intensity per pixel. A typical sensor distinguishes the intensity of incident light in 256 to 4,096 levels, i.e. 8 to 12 bits.
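The relationship between bit depth, intensity levels, and dynamic range can be sketched as below; the decibel figures are the standard 20·log10 of the level ratio, and the chosen bit depths simply mirror the 8 to 12 bit range mentioned above.

```python
import math

def quantization_levels(bits: int) -> int:
    """Number of distinct intensity levels for a given bit depth."""
    return 2 ** bits

def dynamic_range_db(levels: int) -> float:
    """Dynamic range in decibels for the ratio between the brightest
    and darkest distinguishable intensity levels."""
    return 20 * math.log10(levels)

for bits in (8, 10, 12):
    levels = quantization_levels(bits)
    print(f"{bits}-bit sensor: {levels} levels, ~{dynamic_range_db(levels):.0f} dB")
# 8-bit sensor: 256 levels, ~48 dB
# 12-bit sensor: 4096 levels, ~72 dB
```

Each extra bit doubles the number of levels, adding roughly 6 dB of dynamic range.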
For illustration, consider the figure below, which shows a typical signal: a sine wave with growing amplitude plus thermal noise.
The lowest measurable intensity is limited by noise, typically read noise and quantization noise. To be detectable, the light intensity that hits a pixel must exceed this noise floor; see the far left of the figures above and below. The middle regions, showing low and high SNR, are where the signal is quantifiable. As the signal grows and the light intensity becomes high, the pixel saturates. Saturation causes clipping and other artifacts, resulting in a loss of information; see the region on the far right. The range from the lowest to the highest intensity that the sensor can distinguish is called the dynamic range (DR). The ratio between the incident light originating from our point of interest and the added noise is called the signal-to-noise ratio (SNR). The graph below describes the relationship between signal level, noise, and light intensity.
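The detectability rule above can be sketched as a small computation; the noise-floor value and signal levels are hypothetical numbers chosen only to illustrate the regions of the graph.

```python
import math

def snr_db(signal: float, noise: float) -> float:
    """Signal-to-noise ratio in decibels."""
    return 20 * math.log10(signal / noise)

noise_floor = 2.0  # hypothetical read + quantization noise level
for signal in (1.0, 4.0, 100.0):
    if signal <= noise_floor:
        # Far left of the graph: the signal drowns in the noise.
        print(f"signal {signal}: below the noise floor, undetectable")
    else:
        # Middle regions: the signal is quantifiable, with low or high SNR.
        print(f"signal {signal}: SNR = {snr_db(signal, noise_floor):.1f} dB")
```

Doubling the signal relative to the noise adds about 6 dB of SNR, which is why brighter (but unsaturated) regions of the graph are easier to measure accurately.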
The imaging sensor often cannot capture the complete dynamic range of the scene, and the result is an optical phenomenon called clipping: the intensity in an area falls outside the minimum or maximum intensity that the sensor can represent. Clipping occurs in high-contrast scenes, where both very bright and very dark regions are present.
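A minimal sketch of this effect, with a hypothetical pixel whose readout clamps at a noise floor of 4 photons and a full-well capacity of 255:

```python
def read_pixel(photons: int, noise_floor: int = 4, full_well: int = 255) -> int:
    """Hypothetical pixel readout: counts at or below the noise floor are
    lost in noise; counts above the full-well capacity are clipped."""
    if photons <= noise_floor:
        return 0                    # too dark: indistinguishable from noise
    return min(photons, full_well)  # too bright: saturates and clips

# Photon counts across a high-contrast scene, from deep shadow to highlight.
scene = [1, 10, 120, 300, 500]
print([read_pixel(p) for p in scene])  # → [0, 10, 120, 255, 255]
```

The darkest pixel reads as pure black and the two brightest read as the same white value, so the detail that distinguished them is gone.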
The camera’s upper and lower readout limits are roughly fixed in terms of photon count. Changing the camera’s exposure, however, alters the number of photons entering the camera, hitting the pixel, and being read out. Two different exposure values (EV) require a different number of photons to hit a pixel to be detected with the same readout intensity or contrast value. In essence, the objective is to adjust the light reaching the sensor based on the region of interest, so that the imaging sensor operates in a healthy working region with a decent SNR. Raising the readout value of dark objects requires a higher exposure, and lowering the value of bright objects requires a lower exposure.
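The trade-off can be sketched with a simple photon-count model; the flux values and exposure time are hypothetical, and exposure is reduced to exposure time alone for simplicity.

```python
def photons_collected(flux: float, exposure_time: float) -> float:
    """Photons a pixel collects: incident flux (photons/s) times exposure time (s)."""
    return flux * exposure_time

# A dark object with 1/4 the photon flux of a reference object (hypothetical numbers).
ref_flux, dark_flux = 400.0, 100.0
base_time = 0.01  # seconds

# Quadrupling the exposure time (two stops more exposure) gives the dark
# object the same photon count, and hence the same readout, as the
# reference object at the base exposure.
print(photons_collected(dark_flux, base_time * 4))  # → 4.0
print(photons_collected(ref_flux, base_time))       # → 4.0
```

Conversely, a bright object that would saturate the pixel at the base exposure can be brought back into the measurable range by shortening the exposure time.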
To learn more about exposure, we recommend that you read our next article Introduction to Stops.