Optimizing Color Image

Introduction

This tutorial aims to help you optimize the color image quality of the captures from your Zivid camera. We first cover using the essential tools for getting good color information: tuning acquisition and color settings. Then we address the most common challenges (blooming/over-saturation and color inconsistency from HDR) and provide our recommendations on how to overcome them.

Tip

Zivid 2+ cameras have higher resolution and utilize a better demosaicing algorithm than Zivid 2 cameras; thus, Zivid 2+ outputs higher quality 2D images.

Adjusting Acquisition Settings

The quality of the color image you get from the Zivid camera depends on the acquisition settings. This tutorial does not cover finding a combination of exposure settings (exposure time, aperture, brightness, and gain) to get the desired dynamic range; Getting the Right Exposure for Good Point Clouds covers that. This tutorial focuses on tuning the specific acquisition settings that affect the color image quality.

Exposure time and projector brightness do not impact the color image quality. On the other hand, higher aperture values (higher f-numbers, i.e., smaller apertures) are key to avoiding blurry color images; see the figure below. A good rule of thumb is to use f-numbers of 5.66 or higher. For a more detailed explanation and guidance, see Depth of Focus and Depth of Focus Calculator.

Single acquisition capture with aperture 10.37

Single acquisition capture with aperture 3.67

Note

The images above are captured outside of optimal imaging distance to emphasize the out-of-focus effect for large aperture values; images are less blurry when captured in the optimal range.

We advise using low gain values, e.g., 1-2, because high gain values increase image noise (granularity) levels, thus decreasing the 2D image quality; see the figure below.

Single acquisition capture with gain 1

Single acquisition capture with gain 16

We often adjust the acquisition settings to reach the required dynamic range and capture time, focusing on point cloud quality first. With this approach, the color image quality becomes a by-product of the settings optimized for 3D quality at a given capture time, and it is not always good. However, high-quality color images (low blur, low noise, and balanced illumination) also produce high-quality point clouds. Following the fine-tuning guidelines above to improve the color image therefore enhances the 3D quality as well. The key is to compensate for low gain and small apertures (high f-numbers) with longer exposure times and higher projector brightness.
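This compensation can be sketched with a simple proportional exposure model. The `relative_exposure` helper and its linear model below are illustrative assumptions for demonstration only, not a Zivid SDK function:

```python
# Illustrative proportional model of sensor exposure (not the Zivid SDK):
# collected light scales linearly with exposure time, gain, and projector
# brightness, and inversely with the square of the f-number.
def relative_exposure(exposure_time_us, f_number, gain=1.0, brightness=1.0):
    return exposure_time_us * gain * brightness / (f_number ** 2)

# Halving gain (to reduce noise) is compensated by doubling exposure time:
noisy = relative_exposure(10000, 5.66, gain=2.0)
clean = relative_exposure(20000, 5.66, gain=1.0)  # same exposure, less noise
```

Under this model, closing the aperture by one stop (e.g., f/5.66 to f/8) roughly halves the collected light, which a doubled exposure time or increased projector brightness can compensate.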

Adjusting Color Settings

In addition to aperture and gain, the color image quality depends on Gamma, Color Balance, and Color Mode settings. This section aims to give insights into optimizing these settings to get desired color quality in the color image. More information about the color settings can be found here: Processing Settings.

Gamma

Cameras encode luminance differently than the human eye. While the human eye is more sensitive to differences at the darker end of the luminance range, a camera encodes luminance on a linear scale. To compensate for this effect, gamma correction is applied to darken or brighten the image and bring it closer to human perception.
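A minimal sketch of how a power-law gamma curve brightens or darkens an 8-bit value; the `apply_gamma` helper is illustrative only, not part of the Zivid SDK, which applies gamma on-device via the Gamma setting:

```python
# Illustrative power-law gamma curve for a single 8-bit value (not the
# Zivid SDK; the camera applies this internally via the Gamma setting).
def apply_gamma(pixel, gamma):
    normalized = pixel / 255.0                # map to [0, 1]
    return round((normalized ** gamma) * 255.0)

brighter = apply_gamma(64, 0.6)  # gamma < 1 brightens midtones
darker = apply_gamma(64, 1.3)    # gamma > 1 darkens them
```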

Note

The Gamma correction value ranges between 0.25 and 1.5 for Zivid cameras.

The lower the Gamma value, the brighter the image appears; the higher the Gamma value, the darker it appears.

Capture of a bin full of consumer goods with gamma set to 0.6

Capture of a bin full of consumer goods with gamma set to 1.3

Gamma correction is not necessarily required when optimizing image quality for machine vision algorithms. Still, it helps us humans evaluate certain aspects of the color image quality, such as focus and grain/noise levels.

For an implementation example, check out Gamma Correction. This tutorial demonstrates how to capture a 2D image with a configurable gamma correction.

Color Balance

The color temperature of ambient light affects the appearance of the color image. You can adjust the digital gain of the red, green, and blue color channels to make the color image look natural. Below, you can see an image before and after balancing the color. If you want to find the color balance for your settings automatically, follow the Color Balance tutorial.

Note

The color balance gains range between 1.0 and 8.0 for Zivid cameras.

Two images of a box of sweets. The green color is too strong in the left image.

Performing color balance can be beneficial in strong and varying ambient light conditions. Color balance is necessary when capturing without the projector or with low projector brightness values, that is, when ambient light is a significant part of the light seen by the camera. There are two default color balance calibrations: with and without the projector. When projector brightness is set to 0 (off), the color balance is calibrated for 4500 K, which is typical in industrial environments. For projector brightness values above 0, the color balance is calibrated for the color temperature of the projector light.
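The per-channel gain idea can be sketched as follows. This is an illustrative stand-in, not the Zivid SDK, which applies these gains on-device via the Color Balance settings:

```python
# Illustrative per-channel color balance (not the Zivid SDK): each channel
# is multiplied by its digital gain (1.0-8.0 in Zivid) and clipped to 8 bits.
def balance_pixel(rgb, gains):
    return tuple(min(255, round(channel * gain)) for channel, gain in zip(rgb, gains))

# A greenish pixel corrected by boosting the red and blue channels:
balanced = balance_pixel((90, 140, 60), (1.8, 1.0, 2.1))  # (162, 140, 126)
```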

For an implementation example, see Adjusting Color Balance tutorial. This tutorial shows how to balance the color of a 2D image by taking images of a white surface (a piece of paper, wall, or similar) in a loop.

Color Mode

The Color Mode setting controls how the color image is computed. It has the following possible values:

  • ToneMapping

  • UseFirstAcquisition

  • Automatic

ToneMapping uses all the acquisitions to create one merged and normalized color image. For multi-acquisition HDR captures, the dynamic range of the captured images is usually higher than the 8-bit color image range. Tone mapping maps the HDR color data to the 8-bit output range by applying a scaling factor. It can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit range. When using ToneMapping, the color values can be inconsistent over repeated captures if you move, add, or remove objects in the scene. For the most control over the colors, use the UseFirstAcquisition mode.
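The scaling step can be sketched as below. This is a simplified stand-in for demonstration; Zivid's actual tone mapping algorithm is more sophisticated and not public:

```python
# Simplified sketch of tone mapping by a single scale factor (illustrative
# only): HDR intensities are normalized so the brightest value lands at
# the top of the 8-bit output range.
def tone_map(hdr_values, out_max=255):
    scale = out_max / max(hdr_values)
    return [round(v * scale) for v in hdr_values]

mapped = tone_map([100, 400])  # [64, 255]
```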

UseFirstAcquisition uses the color data acquired from the first acquisition provided. If the capture consists of more than one acquisition, then the remaining acquisitions are not used for the color image. No tone mapping is performed. This option provides the most control of the color image, and the color values will be consistent over repeated captures with the same settings.

  • Automatic is the default option.

  • Automatic is identical to UseFirstAcquisition for single-acquisition captures and multi-acquisition captures where all the acquisitions have identical (duplicated) acquisition settings.

  • Automatic is identical to ToneMapping for multi-acquisition HDR captures with differing acquisition settings.
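The rules above can be summarized in a small sketch. The `effective_color_mode` helper is illustrative, not SDK code:

```python
# Sketch of how Automatic resolves to one of the two explicit modes
# (illustrative, not part of the Zivid SDK).
def effective_color_mode(acquisition_settings):
    # One acquisition, or several identical ones -> UseFirstAcquisition;
    # a multi-acquisition HDR with differing settings -> ToneMapping.
    if len(set(acquisition_settings)) == 1:
        return "UseFirstAcquisition"
    return "ToneMapping"

mode = effective_color_mode(["exp-10ms", "exp-40ms"])  # "ToneMapping"
```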

Note

Since SDK 2.7, it is possible to disable tone mapping for HDR captures by setting Color Mode to UseFirstAcquisition.

For single-acquisition captures, tone mapping can be used to brighten dark images.

Single acquisition capture with Color Mode set to UseFirstAcquisition or Automatic (no tone mapping)

Single acquisition capture with Color Mode set to ToneMapping

For multi-acquisition HDR captures, tone mapping can be used to map the high-dynamic-range colors to the more limited dynamic range of the 8-bit output.

Single acquisition capture of the first of three HDR acquisitions with Color Mode set to UseFirstAcquisition or Automatic (no tone mapping)

Single acquisition capture of the second of three HDR acquisitions with Color Mode set to UseFirstAcquisition or Automatic (no tone mapping)

Single acquisition capture of the third of three HDR acquisitions with Color Mode set to UseFirstAcquisition or Automatic (no tone mapping)

HDR with three acquisitions with Color Mode set to ToneMapping or Automatic

HDR capture with UseFirstAcquisition

If you do not want to use tone mapping for your multi-acquisition HDR capture but instead want to use the color image from one of the acquisitions, that is possible. Identify which acquisition you want the color from. Then, make sure that acquisition is first in the sequence of acquisition settings for your HDR capture, and set the Color Mode to UseFirstAcquisition. For the example above, the resulting color image will look like the color image of the single capture using the first of the three HDR acquisitions.

Hint

Make an acquisition first in the sequence by clicking Move to top in Zivid Studio.

Move acquisition to top, making it first in the sequence

If you want to change the acquisition from which the HDR capture gets the color image, simply rearrange the acquisition settings. The UseFirstAcquisition Color Mode is recommended for keeping color consistent over repeated captures, which is useful for, e.g., object classification based on color or texture in the 2D image. For a detailed explanation with an implementation example, check out How to deal with Color Inconsistency from HDR.
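The rearranging step can be sketched with a hypothetical helper that mirrors Zivid Studio's Move to top action; this is not part of the Zivid SDK:

```python
# Hypothetical helper (not part of the Zivid SDK): reorder a list of
# acquisition settings so the chosen one comes first, mirroring the
# "Move to top" action in Zivid Studio.
def move_to_top(acquisitions, index):
    return [acquisitions[index]] + acquisitions[:index] + acquisitions[index + 1:]

acquisitions = ["dark-objects", "midtones", "bright-objects"]
reordered = move_to_top(acquisitions, 2)  # "bright-objects" now supplies the color
```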

Check out how to set processing settings with Zivid SDK, including Gamma, Color Balance, and Color Mode:

C++:

std::cout << "Configuring settings for capture:" << std::endl;
Zivid::Settings settings{
    Zivid::Settings::Engine::phase,
    Zivid::Settings::Sampling::Color::rgb,
    Zivid::Settings::Sampling::Pixel::all,
    Zivid::Settings::RegionOfInterest::Box::Enabled::yes,
    Zivid::Settings::RegionOfInterest::Box::PointO{ 1000, 1000, 1000 },
    Zivid::Settings::RegionOfInterest::Box::PointA{ 1000, -1000, 1000 },
    Zivid::Settings::RegionOfInterest::Box::PointB{ -1000, 1000, 1000 },
    Zivid::Settings::RegionOfInterest::Box::Extents{ -1000, 1000 },
    Zivid::Settings::RegionOfInterest::Depth::Enabled::yes,
    Zivid::Settings::RegionOfInterest::Depth::Range{ 200, 2000 },
    Zivid::Settings::Processing::Filters::Smoothing::Gaussian::Enabled::yes,
    Zivid::Settings::Processing::Filters::Smoothing::Gaussian::Sigma{ 1.5 },
    Zivid::Settings::Processing::Filters::Noise::Removal::Enabled::yes,
    Zivid::Settings::Processing::Filters::Noise::Removal::Threshold{ 7.0 },
    Zivid::Settings::Processing::Filters::Noise::Suppression::Enabled::yes,
    Zivid::Settings::Processing::Filters::Noise::Repair::Enabled::yes,
    Zivid::Settings::Processing::Filters::Outlier::Removal::Enabled::yes,
    Zivid::Settings::Processing::Filters::Outlier::Removal::Threshold{ 5.0 },
    Zivid::Settings::Processing::Filters::Reflection::Removal::Enabled::yes,
    Zivid::Settings::Processing::Filters::Reflection::Removal::Mode::global,
    Zivid::Settings::Processing::Filters::Cluster::Removal::Enabled::yes,
    Zivid::Settings::Processing::Filters::Cluster::Removal::MaxNeighborDistance{ 10 },
    Zivid::Settings::Processing::Filters::Cluster::Removal::MinArea{ 100 },
    Zivid::Settings::Processing::Filters::Experimental::ContrastDistortion::Correction::Enabled::yes,
    Zivid::Settings::Processing::Filters::Experimental::ContrastDistortion::Correction::Strength{ 0.4 },
    Zivid::Settings::Processing::Filters::Experimental::ContrastDistortion::Removal::Enabled::no,
    Zivid::Settings::Processing::Filters::Experimental::ContrastDistortion::Removal::Threshold{ 0.5 },
    Zivid::Settings::Processing::Filters::Hole::Repair::Enabled::yes,
    Zivid::Settings::Processing::Filters::Hole::Repair::HoleSize{ 0.2 },
    Zivid::Settings::Processing::Filters::Hole::Repair::Strictness{ 1 },
    Zivid::Settings::Processing::Color::Balance::Red{ 1.0 },
    Zivid::Settings::Processing::Color::Balance::Green{ 1.0 },
    Zivid::Settings::Processing::Color::Balance::Blue{ 1.0 },
    Zivid::Settings::Processing::Color::Gamma{ 1.0 },
    Zivid::Settings::Processing::Color::Experimental::Mode::automatic
};
std::cout << settings << std::endl;
C#:

Console.WriteLine("Configuring settings for capture:");
var settings = new Zivid.NET.Settings()
{
    Engine = Zivid.NET.Settings.EngineOption.Phase,
    Sampling = { Color = Zivid.NET.Settings.SamplingGroup.ColorOption.Rgb, Pixel = Zivid.NET.Settings.SamplingGroup.PixelOption.All },
    RegionOfInterest = { Box = {
                            Enabled = true,
                            PointO = new Zivid.NET.PointXYZ{ x = 1000, y = 1000, z = 1000 },
                            PointA = new Zivid.NET.PointXYZ{ x = 1000, y = -1000, z = 1000 },
                            PointB = new Zivid.NET.PointXYZ{ x = -1000, y = 1000, z = 1000 },
                            Extents = new Zivid.NET.Range<double>(-1000, 1000),
                        },
                        Depth =
                        {
                            Enabled = true,
                            Range = new Zivid.NET.Range<double>(200, 2000),
                        }
    },
    Processing = { Filters = { Smoothing = { Gaussian = { Enabled = true, Sigma = 1.5 } },
                               Noise = { Removal = { Enabled = true, Threshold = 7.0 },
                                         Suppression = { Enabled = true },
                                         Repair ={ Enabled = true } },
                               Outlier = { Removal = { Enabled = true, Threshold = 5.0 } },
                               Reflection = { Removal = { Enabled = true, Mode = ReflectionFilterModeOption.Global} },
                               Cluster = { Removal = { Enabled = true, MaxNeighborDistance = 10, MinArea = 100} },
                               Hole = { Repair = { Enabled = true, HoleSize = 0.2, Strictness = 1 } },
                               Experimental = { ContrastDistortion = { Correction = { Enabled = true,
                                                                                      Strength = 0.4 },
                                                                        Removal = { Enabled = false,
                                                                                   Threshold = 0.5 } } } },
                   Color = { Balance = { Red = 1.0, Green = 1.0, Blue = 1.0 },
                             Gamma = 1.0,
                             Experimental = { Mode = ColorModeOption.Automatic } } }
};
Console.WriteLine(settings);
Python:

print("Configuring settings for capture:")
settings = zivid.Settings()
settings.engine = "phase"
settings.sampling.color = "rgb"
settings.sampling.pixel = "all"
settings.region_of_interest.box.enabled = True
settings.region_of_interest.box.point_o = [1000, 1000, 1000]
settings.region_of_interest.box.point_a = [1000, -1000, 1000]
settings.region_of_interest.box.point_b = [-1000, 1000, 1000]
settings.region_of_interest.box.extents = [-1000, 1000]
settings.region_of_interest.depth.enabled = True
settings.region_of_interest.depth.range = [200, 2000]
filters = settings.processing.filters
filters.smoothing.gaussian.enabled = True
filters.smoothing.gaussian.sigma = 1.5
filters.noise.removal.enabled = True
filters.noise.removal.threshold = 7.0
filters.noise.suppression.enabled = True
filters.noise.repair.enabled = True
filters.outlier.removal.enabled = True
filters.outlier.removal.threshold = 5.0
filters.reflection.removal.enabled = True
filters.reflection.removal.mode = "global"
filters.cluster.removal.enabled = True
filters.cluster.removal.max_neighbor_distance = 10
filters.cluster.removal.min_area = 100
filters.experimental.contrast_distortion.correction.enabled = True
filters.experimental.contrast_distortion.correction.strength = 0.4
filters.experimental.contrast_distortion.removal.enabled = False
filters.experimental.contrast_distortion.removal.threshold = 0.5
filters.hole.repair.enabled = True
filters.hole.repair.hole_size = 0.2
filters.hole.repair.strictness = 1
color = settings.processing.color
color.balance.red = 1.0
color.balance.green = 1.0
color.balance.blue = 1.0
color.gamma = 1.0
color.experimental.mode = "automatic"
print(settings)

Dealing with Blooming

As discussed in Blooming - Bright Spots in the Point Cloud, blooming is an effect that occurs when extremely intense light from a point or region hits the imaging sensor and causes over-saturation. In this section, we discuss how to avoid blooming in the scene.

There are multiple ways to handle blooming. The methods covered in this tutorial are: changing the background, changing the camera position and orientation, utilizing HDR, utilizing Color Mode, and taking an additional 2D capture.

Change the background

If the background is the blooming source, change the background material to a more diffuse and absorptive material (Optical Properties of Materials).

Scene with white background with blooming

Same scene with black background and effect from blooming removed from the point cloud

Angle the camera

Changing the camera position and orientation is a simple and efficient way of dealing with blooming. It is preferable to offset the camera and tilt it so that the projector and other light sources do not directly reflect into the camera. This is shown on the right side of the image below.

Positioning the camera to avoid blooming

By simply tilting the camera, the data lost in the over-saturated region can be recovered, as seen in the right side of the image above. The left image below shows a point cloud taken when the camera is mounted perpendicular to the surface, while the right image shows the scene taken at a slight tilt.

Blooming in the Zivid point cloud and how to fix it

A simple rule of thumb is to mount the camera so that the region of interest is in front of the camera as shown in the image below:

Positioning the camera with regards to region of interest to avoid blooming

HDR capture

Use multi-acquisition 3D HDR by adding one or more 3D acquisitions to cover the blooming highlights. Keep in mind this will come at the cost of added capture time.

Scene with blooming (single acquisition)

Same scene with effect from blooming removed from the point cloud (multi-acquisition HDR)

Following the above steps will most likely recover the missing points in the point cloud due to the blooming effect. However, there is still a chance that the over-saturated area remains in the color image.

Over-saturation in the color image from blooming

Over-saturation in the color image might not be an issue if you care only about the 3D point cloud quality. However, over-saturation can be a problem if you utilize machine vision algorithms on the color image, e.g., template matching.

Note

The default Color Mode is Automatic, which is identical to ToneMapping for multi-acquisition HDR captures with differing acquisition settings. The color merge (tone mapping) algorithm used for HDR captures is the source of the over-saturation in the color image. This algorithm solves the challenging problem of mapping color images of different dynamic ranges into one color image with a limited dynamic range, but over-saturation is one of its limitations.

HDR capture with UseFirstAcquisition Color Mode

Note

This solution can be used only with SDK 2.7 and higher. Change the KB to an older version in the top left corner to see a solution for SDK 2.6 or lower.

One way to overcome the over-saturation is to identify the acquisition optimized for the brightest object in the scene. Then, set that acquisition to be first in the acquisition settings. Finally, capture your HDR with Color Mode set to UseFirstAcquisition.

Hint

Make an acquisition first in the sequence by clicking Move to top in Zivid Studio.

Move acquisition to top, making it first in the sequence

In some cases, over-saturation can be removed or at least significantly reduced.

Over-saturation in the color image removed with HDR capture with UseFirstAcquisition Color Mode

If the imaged object's material is specular, this method may not remove the over-saturation. In that case, it is worth considering an additional capture with the projector turned off; see the potential solution below (Without the projector).

Additional capture

An alternative solution to overcome over-saturation in the color image is to add a separate capture and optimize its settings specifically for avoiding this image artifact. This approach assumes using the point cloud data from the main capture and the color image from the additional capture. The additional capture can be a 2D or 3D capture, with or without a projector. If you use a 3D capture, it must be without tone mapping (Color Mode set to UseFirstAcquisition).
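The combining step can be sketched as follows. The `combine_captures` helper and its data layout are illustrative assumptions, not SDK code, and the two captures are assumed pixel-aligned (same camera, same resolution):

```python
# Sketch of the additional-capture approach (illustrative, not SDK code):
# keep XYZ geometry from the main 3D capture and take colors from the
# additional capture, assuming the two are pixel-aligned.
def combine_captures(main_xyz, extra_rgb):
    if len(main_xyz) != len(extra_rgb):
        raise ValueError("captures must be pixel-aligned")
    return [{"xyz": point, "rgb": color} for point, color in zip(main_xyz, extra_rgb)]

cloud = combine_captures([(0.0, 0.0, 500.0)], [(120, 80, 60)])
```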

Note

Take the additional capture before or after the main capture. Decide based on, e.g., the algorithm execution time if you use different threads for algorithms that utilize 2D images and 3D point clouds.

Tip

Capturing a separate 2D image allows you to optimize the acquisition settings for color image quality (in most cases, we optimize settings for excellent point cloud quality).

With the projector

In some cases, over-saturation can be removed with the projector in use.

Over-saturation in the color image removed with additional 2D capture with projector

Without the projector

If the imaged object's material is specular, over-saturation is less likely to be removed with the projector on. Therefore, it is worth considering turning the projector off.

Over-saturation in the color image removed with additional 2D capture without projector

If capturing without the projector, you must ensure the camera gets sufficient light. The options are to use longer exposure times, higher gain values, and lower aperture (f-number) values, or to add an additional light source to the scene. Use diffuse lighting and turn it on only during the color image acquisition; if left on during the main acquisition, the additional light source will likely decrease the point cloud quality.

Without the projector with color balance

Balancing the color is also most likely necessary when the projector is not used. For an implementation example, see Adjusting Color Balance tutorial. This tutorial shows how to balance the color of a 2D image by taking images of a white surface (a piece of paper, wall, or similar) in a loop.

Over-saturation in the color image removed with additional 2D capture without projector and white balance

Dealing with Color Inconsistency from HDR

The color image from the multi-acquisition HDR capture is a result of tone mapping if Color Mode is set to ToneMapping or Automatic (default). While tone mapping solves the challenging problem of optimizing the color for that particular capture, it has a downside. Since it is a function of the scene, the tone mapping technique introduces color inconsistency with changes in the scene. The following example explains this phenomenon.

Let us say we have a relatively dark scene (pears on a black surface). We find acquisition settings that cover a wide enough dynamic range and perform a multi-acquisition HDR capture (figure on the left). We then add a bright object (banana) to the scene and capture it again with the same settings (figure on the right).

HDR capture of a dark scene (Color Mode: Automatic or ToneMapping)

HDR capture with the same settings of the same scene but with an additional bright object added

Let us look at the output color image (figure on the right) and specifically at the dark objects initially in the scene (pears or black surface). We notice that the RGB values of these objects are different before and after adding the bright object (banana) to the scene.

The change of RGB values can be a problem for some applications, e.g., ones using algorithms that classify objects based on color information. The reason is that these algorithms will expect the RGB values to remain the same (consistent) for repeated captures.
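Why the RGB values shift can be sketched with a scene-dependent scale factor. This is an illustrative simplification; Zivid's actual tone mapping algorithm is more sophisticated:

```python
# Sketch of why scene-dependent tone mapping breaks color consistency
# (illustrative only): the scale factor tracks the brightest value in the
# scene, so unchanged dark pixels shift when a bright object is added.
def tone_mapped_value(pixel, scene_max, out_max=255):
    return round(pixel * out_max / scene_max)

pear_before = tone_mapped_value(100, scene_max=400)  # dark scene only
pear_after = tone_mapped_value(100, scene_max=800)   # bright object added
# pear_before != pear_after although the pear itself did not change
```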

HDR capture with UseFirstAcquisition Color Mode

Note

This solution can be used only with SDK 2.7 and higher. Change the KB to an older version in the top left corner to see a solution for SDK 2.6 or lower.

To overcome the color inconsistency from HDR, identify which of the acquisitions from your HDR capture gives the best color. We recommend the acquisition optimized for the brightest object in the scene to avoid saturation. Then, set that acquisition to be the first in the acquisition settings. Finally, capture your HDR with Color Mode set to UseFirstAcquisition.

We will walk you through the process with an example. Let us assume that we have an HDR with two acquisitions. The first of the acquisitions is optimized for the dark objects (pears). The second is optimized for bright objects (banana). HDR capture with UseFirstAcquisition for Color Mode yields the following results.

HDR capture of a dark scene (Color Mode: UseFirstAcquisition)

HDR capture with the same settings of the same scene but with an additional bright object added

The color of the dark objects (pears) is the same in both images. The color consistency is preserved.

However, because the first acquisition is optimized for the dark objects, the brightest object in the scene (banana) is saturated. Saturation will likely cause issues, e.g., if we want to classify objects based on color. To overcome this problem, we can rearrange the acquisition settings. For the first acquisition we select the one optimized for the brightest object in the scene (banana). The second is optimized for the dark objects (pears). Now, we see that the color consistency is preserved with bright and dark objects captured together and separately. In addition, the brightest object (banana) is not saturated.

HDR capture of a dark scene (Color Mode: UseFirstAcquisition)

HDR capture with the same settings of the same scene but with an additional bright object added

HDR capture with the same settings of the same scene but only with the bright object

Note

The acquisition that provides the best color is also a valuable 3D acquisition: it is optimized for the brightest objects in the scene and thus provides very good SNR for those objects. An additional acquisition is not needed to deal with color inconsistency from HDR; that acquisition is likely already part of your HDR acquisition settings.

If the color image is too dark, it can be fixed with the Gamma setting.

Caution

The first of the acquisitions the Capture Assistant returns is likely not the most suitable one for the color image. Therefore, if using the Capture Assistant and UseFirstAcquisition for Color Mode, you might need to rearrange your acquisitions.

Hint

Make an acquisition first in the sequence by clicking Move to top in Zivid Studio.

Move acquisition to top, making it first in the sequence

Additional Capture

Note

This solution should be used only if you have to use Automatic or ToneMapping Color Mode for your HDR capture.

An alternative solution to overcome the color inconsistency from HDR is to take a separate capture in addition to the main capture. This approach assumes using the main capture for the point cloud data and the additional capture for the color image. The additional capture can be a 2D or 3D capture, with or without a projector. If you use a 3D capture, it must be without tone mapping (Color Mode set to UseFirstAcquisition).

Single acquisition capture of a dark scene with Color Mode set to UseFirstAcquisition or Automatic

Single acquisition capture with the same settings of the same scene but with an additional bright object added (Color Mode set to UseFirstAcquisition or Automatic)

Further reading

Continue to How to Get Good 3D Data on a Pixel of Interest.