How to deal with Blooming
Problem
As discussed in Blooming - Bright Spots in the Point Cloud, blooming is an effect that occurs when extremely intense light from a point or region hits the imaging sensor and results in over-saturation. In this article we discuss how to avoid blooming in the scene.
Potential Solutions
There are multiple ways to handle blooming. The methods covered in this tutorial are: changing the background, changing the camera position and orientation, utilizing HDR, utilizing Color Mode, and taking an additional 2D capture.
Change the background
If the background is the blooming source, change the background to a more diffuse and absorptive material (see Optical Properties of Materials).
Scene with white background with blooming
Same scene with black background and effect from blooming removed from the point cloud
Angle the camera
Changing the camera position and orientation is a simple and efficient way of dealing with blooming. It is preferable to offset the camera and tilt it so that the projector and other light sources do not directly reflect into the camera. This is shown on the right side of the image below.
By simply tilting the camera, the data lost in the over-saturated region can be recovered, as seen on the right side of the image above. The left image below shows a point cloud taken with the camera mounted perpendicular to the surface, while the right image shows the same scene captured at a slight tilt.
A simple rule of thumb is to mount the camera so that the region of interest is in front of the camera as shown in the image below:
HDR capture
Use multi-acquisition 3D HDR by adding one or more 3D acquisitions to cover the blooming highlights. Keep in mind this will come at the cost of added capture time.
Scene with blooming (single acquisition)
Same scene with effect from blooming removed from the point cloud (multi-acquisition HDR)
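As a rough illustration, a multi-acquisition 3D HDR capture with the Zivid Python API could look like the sketch below. The aperture values are placeholders; tune the acquisitions so that one of them covers the bright, blooming regions.

```python
import zivid

app = zivid.Application()
camera = app.connect_camera()

# Multi-acquisition HDR: one acquisition tuned for the bright (blooming) regions,
# and one or more for the rest of the scene. The aperture values are placeholders.
settings = zivid.Settings()
for aperture in (11.31, 5.66, 2.83):
    settings.acquisitions.append(zivid.Settings.Acquisition(aperture=aperture))

frame = camera.capture(settings)  # all acquisitions are merged into one point cloud
frame.save("blooming_hdr.zdf")
```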
Following the above steps will most likely recover the points missing from the point cloud due to the blooming effect. However, there is still a chance that the over-saturated area remains in the color image.
Over-saturation in the color image might not be an issue if you care only about the 3D point cloud quality. However, over-saturation can be a problem if you utilize machine vision algorithms on the color image, e.g., template matching.
Note
The default Color Mode is Automatic, which is identical to ToneMapping for multi-acquisition HDR captures with differing acquisition settings. The color merge (tone mapping) algorithm used for HDR captures is the source of the over-saturation in the color image. This algorithm solves the challenging problem of mapping color images with different dynamic ranges into one color image with a limited dynamic range. However, the tone mapping algorithm has its limitations, and over-saturation is one of them.
HDR capture with UseFirstAcquisition Color Mode
Note
This solution requires SDK 2.7 or higher. Switch the knowledge base to an older version (top-left corner) to see a solution for SDK 2.6 or lower.
One way to overcome the over-saturation is to identify the acquisition optimized for the brightest object in the scene. Then, set that acquisition to be the first one in the acquisition settings. Finally, capture your HDR with the Color Mode set to UseFirstAcquisition.
Hint
Make an acquisition the first in the sequence by clicking … → Move to top in Zivid Studio.
In some cases, over-saturation can be removed or at least significantly reduced.
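A minimal sketch of such a capture with the Zivid Python API is shown below. The aperture values are placeholders, and the attribute path and value name for the Color Mode setting (assumed here to be processing.color.experimental.mode with the value useFirstAcquisition, as in SDK 2.7-era releases) may differ between SDK versions; verify against the documentation for your SDK.

```python
import zivid

app = zivid.Application()
camera = app.connect_camera()

settings = zivid.Settings()
# Place the acquisition optimized for the brightest object FIRST in the list,
# since UseFirstAcquisition takes the color image from that acquisition.
# The aperture values are placeholders.
for aperture in (11.31, 5.66, 2.83):
    settings.acquisitions.append(zivid.Settings.Acquisition(aperture=aperture))

# Assumed setting path and value name (SDK 2.7-era); check your SDK version.
settings.processing.color.experimental.mode = "useFirstAcquisition"

frame = camera.capture(settings)
frame.save("hdr_use_first_acquisition.zdf")
```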
If the imaged object has a specular material, this method may not remove the over-saturation. In that case, it is worth considering an additional capture with the projector turned off; see the following potential solution (Without the projector).
Additional capture
An alternative solution to overcome over-saturation in the color image is to add a separate capture and optimize its settings specifically for avoiding this image artifact. This approach assumes using the point cloud data from the main capture and the color image from the additional capture. The additional capture can be a 2D or 3D capture, with or without a projector. If you use 3D capture, it must be without tone mapping (Color Mode setting set to UseFirstAcquisition).
Note
Take the additional capture before or after the main capture. Decide based on, for example, algorithm execution times if you run the algorithms that utilize 2D images and 3D point clouds in different threads.
Tip
Capturing a separate 2D image allows you to optimize the acquisition settings for color image quality (in most cases, we optimize settings for excellent point cloud quality).
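For example, using the Zivid Python API, the point cloud from the main 3D capture can be combined with the color image from a separate 2D capture roughly as sketched below. All acquisition values are placeholders, and the per-pixel combination assumes full-resolution 2D and 3D captures, where the image and point cloud have the same resolution.

```python
import datetime
import zivid

app = zivid.Application()
camera = app.connect_camera()

# Main 3D capture: settings optimized for point cloud quality (placeholder values).
settings_3d = zivid.Settings()
settings_3d.acquisitions.append(
    zivid.Settings.Acquisition(
        aperture=5.66,
        exposure_time=datetime.timedelta(microseconds=10000),
    )
)

# Additional 2D capture: settings optimized for color image quality (placeholder values).
settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(
    zivid.Settings2D.Acquisition(
        aperture=2.83,
        exposure_time=datetime.timedelta(microseconds=10000),
        brightness=1.8,
    )
)

frame = camera.capture(settings_3d)
frame_2d = camera.capture(settings_2d)

xyz = frame.point_cloud().copy_data("xyz")  # geometry from the main 3D capture
rgba = frame_2d.image_rgba().copy_data()    # color from the additional 2D capture
# Assuming full-resolution captures, xyz (H x W x 3) and rgba (H x W x 4) refer to
# the same camera pixel at the same row/column index.
```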
With the projector
In some cases, over-saturation can be removed with the projector in use.
Without the projector
If the imaged object has a specular material, it is less likely that over-saturation will be removed with the projector on. Therefore, it is worth considering turning the projector off.
When capturing without the projector, you must ensure the camera gets sufficient light. The options are to use longer exposure times, higher gain values, and lower aperture (f-number) values, or to add an additional light source to the scene. Use diffuse lighting and turn it on only during the color image acquisition. If turned on during the main acquisition, the additional light source will likely degrade the point cloud quality.
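A sketch of a projector-off 2D acquisition with the Zivid Python API, compensating for the missing projector light with a longer exposure time, higher gain, and a lower aperture value, could look like this (all values are placeholders):

```python
import datetime
import zivid

app = zivid.Application()
camera = app.connect_camera()

# 2D capture without the projector: brightness = 0 turns the projector off, so
# compensate with a longer exposure time, higher gain, and a lower aperture (f-number).
# All values are placeholders; tune them to your scene and external lighting.
settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(
    zivid.Settings2D.Acquisition(
        brightness=0.0,
        aperture=2.83,
        exposure_time=datetime.timedelta(microseconds=20000),
        gain=2.0,
    )
)

frame_2d = camera.capture(settings_2d)
frame_2d.image_rgba().save("color_without_projector.png")
```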
Without the projector with color balance
Color balancing is also most likely necessary when the projector is not used. For an implementation example, see the Adjusting Color Balance tutorial. This tutorial shows how to balance the color of a 2D image by taking images of a white surface (a piece of paper, a wall, or similar) in a loop.
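If your SDK version exposes color balance gains as 2D settings (assumed here to be under processing.color.balance, as in recent SDK 2.x releases), the gains found by the Adjusting Color Balance tutorial can be applied roughly as sketched below. The gain values are placeholders for whatever your calibration loop produces.

```python
import datetime
import zivid

app = zivid.Application()
camera = app.connect_camera()

settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(
    zivid.Settings2D.Acquisition(
        brightness=0.0,
        exposure_time=datetime.timedelta(microseconds=20000),
    )
)

# Apply color balance gains (placeholder values) determined by capturing a white
# surface in a loop, as in the Adjusting Color Balance tutorial. The attribute path
# is an assumption based on recent SDK 2.x releases; verify it for your SDK version.
settings_2d.processing.color.balance.red = 2.3
settings_2d.processing.color.balance.green = 1.0
settings_2d.processing.color.balance.blue = 2.6

frame_2d = camera.capture(settings_2d)
frame_2d.image_rgba().save("color_balanced.png")
```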
Note
It is intuitive and conceptually correct to use a 2D capture as the additional capture. If using a Zivid Two camera, always go for the 2D capture. However, if using Zivid One+, consider the limitation below on switching between 2D and 3D capture calls.
Limitation
Here, we explain the limitation encountered when performing captures in a sequence while switching between 2D and 3D capture calls.
Caution
If you perform captures in a sequence where you switch between 2D and 3D capture calls, Zivid One+ cameras (not Zivid Two) have a switching time penalty. In SDK 2.6 and beyond, this penalty occurs only if the 2D capture settings use brightness > 0, because different patterns then need to be flashed to the projector controller, and this takes time. As a result, there is a delay between the captures when switching the capture mode (2D and 3D). The delay is approximately 350 ms when switching from 3D to 2D and 650 ms when switching from 2D to 3D. Therefore, there can be roughly 1 s of overhead in addition to the 2D capture time and 3D capture time.
2D Capture Settings | Projector Brightness = 0 | Projector Brightness > 0
Zivid Two           | None                     | None
Zivid One+          | None                     | 350 - 900 ms switching time penalty
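If you want to see how the switching penalty affects your setup, a rough timing sketch like the one below (Zivid Python API, placeholder settings) can be used to compare alternating 2D and 3D capture calls.

```python
import time
import zivid

app = zivid.Application()
camera = app.connect_camera()

# Placeholder settings: default 3D acquisition, and a 2D acquisition with
# brightness > 0, which is the case that triggers the penalty on Zivid One+.
settings_3d = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])
settings_2d = zivid.Settings2D(acquisitions=[zivid.Settings2D.Acquisition(brightness=1.8)])

# Alternate between 3D and 2D captures and time each call; on Zivid One+ the mode
# switches add roughly 350-650 ms per direction when the 2D brightness is > 0.
for _ in range(3):
    start = time.time()
    camera.capture(settings_3d)
    print(f"3D capture: {time.time() - start:.2f} s")

    start = time.time()
    camera.capture(settings_2d)
    print(f"2D capture: {time.time() - start:.2f} s")
```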
Tip
Zivid Two cameras do not have the time penalty that Zivid One+ cameras have; switching between 2D and 3D capture modes with Zivid Two happens instantly.
Note
Switching time between 3D and 2D capture modes has been removed for Zivid One+ cameras in SDK 2.6. This applies when 2D capture is used with the projector turned off (projector brightness setting set to 0).
For Zivid One+ cameras, if you must use the projector, taking another 3D capture for the color image may be less time-consuming than taking another 2D capture with the projector. This approach assumes you use the point cloud data from the main 3D capture (single or multi-acquisition HDR) and the color image from the additional 3D capture. If you use a single acquisition for the main 3D capture, use the same exposure time for the additional 3D capture to optimize the capture time. If you use multi-acquisition HDR, the exposure time of the last HDR acquisition should be the same as the exposure time of the additional 3D capture.
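A sketch of that approach with the Zivid Python API could look like the following. The aperture and exposure values are placeholders, and the Color Mode attribute path and value name are assumptions based on SDK 2.7-era releases; check your SDK documentation.

```python
import datetime
import zivid

app = zivid.Application()
camera = app.connect_camera()

exposure = datetime.timedelta(microseconds=10000)  # placeholder exposure time

# Main 3D capture (single acquisition here; with multi-acquisition HDR, match the
# exposure time of the LAST HDR acquisition with the additional capture below).
settings_main = zivid.Settings()
settings_main.acquisitions.append(
    zivid.Settings.Acquisition(aperture=5.66, exposure_time=exposure)
)

# Additional 3D capture optimized for the color image, using the same exposure time
# to optimize capture time, and without tone mapping.
settings_color = zivid.Settings()
settings_color.acquisitions.append(
    zivid.Settings.Acquisition(aperture=2.83, exposure_time=exposure)
)
# Assumed setting path and value name (SDK 2.7-era); check your SDK version.
settings_color.processing.color.experimental.mode = "useFirstAcquisition"

frame_main = camera.capture(settings_main)
frame_color = camera.capture(settings_color)

xyz = frame_main.point_cloud().copy_data("xyz")     # geometry from the main capture
rgba = frame_color.point_cloud().copy_data("rgba")  # color from the additional capture
```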