2D + 3D Capture Strategy

Note: if you do not care about color information, jump straight to the next section, Selecting 3D and 2D settings based on capture speed.

Many detection algorithms commonly used in piece-picking applications rely on 2D data to identify which object to pick. In this article, we provide insights into different ways to acquire 2D information, their pros and cons, and how external lighting conditions come into play. We also touch upon various 2D-3D approaches, their data quality, and how they affect cycle times.

There are two approaches to get 2D data:

  1. Separate 2D capture via camera.capture(Zivid::Settings2D).imageRGBA(), see 2D Image Capture Process.

  2. As part of the 3D capture via camera.capture(Zivid::Settings).pointCloud.copyImageRGBA(), see Point Cloud Capture Process.

Which one to use depends on your requirements and the machine vision pipeline. We advocate for a dedicated 2D capture, as it provides better control over the 2D settings for color optimization and can leverage multi-threading and optimized scheduling. It also grants you increased flexibility in configuring the camera resolution and projector settings. Utilizing 2D data from the 3D capture is simpler, but you may have to compromise on speed to get the desired 2D quality.
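A dedicated 2D capture makes the scheduling benefit concrete: the 2D image can feed detection while the 3D capture is still in flight. The sketch below illustrates the idea with plain Python stand-ins (capture_2d, capture_3d, and detect are hypothetical placeholders, not Zivid SDK calls):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the real capture and detection steps.
def capture_2d():
    return "2d-image"

def capture_3d():
    return "point-cloud"

def detect(image):
    return f"mask-from-{image}"

with ThreadPoolExecutor() as pool:
    image = capture_2d()                    # fast, separate 2D capture first
    detection = pool.submit(detect, image)  # detection runs in the background...
    cloud = capture_3d()                    # ...while the 3D capture proceeds
    mask = detection.result()

print(mask, cloud)
```

With 2D data from the 3D capture instead, detection cannot start until the full 3D acquisition and processing are done.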

Tip

When you capture 2D separately, you should disable RGB in the 3D capture. This saves both acquisition and processing time. Disable RGB in the 3D capture by setting Sampling::Color to disabled.

Our recommendation:
  • Separate 2D capture with full resolution and projector on.

  • Subsampled 3D capture with color disabled.

Camera resolution and 1-to-1 mapping

For accurate 2D segmentation and detection, a high-resolution color image is beneficial. Zivid 2+ has a 5 MPx imaging sensor, while Zivid 2 has a 2.3 MPx sensor. The following tables show the resolution outputs of the different cameras for both 2D and 3D captures.

2D capture resolutions

2D capture        Zivid 2        Zivid 2+
Full resolution   1944 x 1200    2448 x 2048
2x2 subsampled    972 x 600      1224 x 1024
4x4 subsampled    Not available  612 x 512

3D capture resolutions

3D capture           Zivid 2        Zivid 2+
Full resolution [1]  1944 x 1200    2448 x 2048
2x2 subsampled [1]   972 x 600      1224 x 1024
4x4 subsampled [1]   Not available  612 x 512
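The subsampled resolutions above follow directly from integer division of the full sensor resolution by the sampling stride. A quick sanity check in Python (the helper below is illustrative, not part of the Zivid SDK):

```python
def subsampled_resolution(width, height, stride):
    """Output resolution after stride x stride subsampling."""
    return width // stride, height // stride

# Zivid 2+ (2448 x 2048 sensor):
print(subsampled_resolution(2448, 2048, 2))  # (1224, 1024)
print(subsampled_resolution(2448, 2048, 4))  # (612, 512)

# Zivid 2 (1944 x 1200 sensor):
print(subsampled_resolution(1944, 1200, 2))  # (972, 600)
```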

The output resolution of both 2D and 3D captures is controlled via the combination of the Sampling::Pixel and Processing::Resampling settings, see pixel sampling and Resampling. This means a 2D pixel may no longer correspond 1-to-1 with a 3D point. Consequently, it becomes more challenging to extract the 3D data corresponding to a segmented mask in the 2D image.

As mentioned, it is common to require high-resolution 2D data for segmentation and detection. For example, our recommended Z2+ M130 Consumer Goods Quality preset uses Sampling::Pixel set to blueSubsample2x2. In this case, we should either:

  • Upsample the 3D data to restore 1-to-1 correspondence, or

  • Map 2D indices to the indices in the subsampled 3D data.

Resampling

To match the resolution of the 2D capture, apply an upsampling that undoes the subsampling. This retains the speed advantage of the subsampled capture. For example:

C++:

auto settings2D = Zivid::Settings2D{
    Zivid::Settings2D::Acquisitions{ Zivid::Settings2D::Acquisition{} },
    Zivid::Settings2D::Sampling::Pixel::all,
};
auto settings = Zivid::Settings{
    Zivid::Settings::Engine::phase,
    Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} },
    Zivid::Settings::Sampling::Pixel::blueSubsample2x2,
    Zivid::Settings::Sampling::Color::disabled,
    Zivid::Settings::Processing::Resampling::Mode::upsample2x2,
};

Python:

import zivid

settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(zivid.Settings2D.Acquisition())
settings_2d.sampling.pixel = zivid.Settings2D.Sampling.Pixel.all
settings = zivid.Settings()
settings.engine = "phase"
settings.acquisitions.append(zivid.Settings.Acquisition())
settings.sampling.pixel = zivid.Settings.Sampling.Pixel.blueSubsample2x2
settings.sampling.color = zivid.Settings.Sampling.Color.disabled
settings.processing.resampling.mode = zivid.Settings.Processing.Resampling.Mode.upsample2x2

For more details see Resampling.
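Conceptually, upsampling restores the full-resolution grid so that every 2D pixel again has a corresponding 3D entry. The NumPy sketch below only illustrates the resolution change using nearest-neighbor repetition; the SDK's actual resampling is more sophisticated than this:

```python
import numpy as np

# Toy 2x2 "subsampled" depth map.
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

# Repeat each value into a 2x2 block to restore the full-resolution grid.
upsampled = depth.repeat(2, axis=0).repeat(2, axis=1)
print(upsampled.shape)  # (4, 4)
print(upsampled[0])     # [1. 1. 2. 2.]
```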

The other option is to map the 2D indices to the indices in the subsampled 3D data. This option is a bit more complicated, but it is potentially more efficient. The point cloud can remain subsampled, and thus consume less memory and processing power.

To establish a correlation between the full-resolution 2D image and the subsampled point cloud, a specific mapping technique is required. This process involves extracting RGB values from the 2D image pixels that correspond to the blue or red pixels of the Bayer grid.

Zivid::Experimental::Calibration::pixelMapping(camera, settings) can be used to get the parameters required to perform this mapping. The following example uses this function.

C++:

const auto pixelMapping = Zivid::Experimental::Calibration::pixelMapping(camera, settings);
std::cout << "Pixel mapping: " << pixelMapping << std::endl;
cv::Mat mappedBGR(
    fullResolutionBGR.rows / pixelMapping.rowStride(),
    fullResolutionBGR.cols / pixelMapping.colStride(),
    CV_8UC3);
std::cout << "Mapped width: " << mappedBGR.cols << ", height: " << mappedBGR.rows << std::endl;
for(size_t row = 0; row < static_cast<size_t>(fullResolutionBGR.rows - pixelMapping.rowOffset());
    row += pixelMapping.rowStride())
{
    for(size_t col = 0; col < static_cast<size_t>(fullResolutionBGR.cols - pixelMapping.colOffset());
        col += pixelMapping.colStride())
    {
        mappedBGR.at<cv::Vec3b>(row / pixelMapping.rowStride(), col / pixelMapping.colStride()) =
            fullResolutionBGR.at<cv::Vec3b>(row + pixelMapping.rowOffset(), col + pixelMapping.colOffset());
    }
}
return mappedBGR;

Python:

from zivid.experimental import calibration

pixel_mapping = calibration.pixel_mapping(camera, settings)
return rgba[
    int(pixel_mapping.row_offset) :: pixel_mapping.row_stride,
    int(pixel_mapping.col_offset) :: pixel_mapping.col_stride,
    0:3,
]
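The same stride/offset mapping can be sanity-checked on a synthetic array. In the NumPy sketch below, the stride and offset values are hypothetical stand-ins for what pixelMapping would return for a given camera and settings:

```python
import numpy as np

# Hypothetical values; in practice they come from pixelMapping(camera, settings).
row_stride, col_stride = 2, 2
row_offset, col_offset = 0, 0

# Stand-in for a full-resolution RGB image (8 x 8 pixels, 3 channels).
full_rgb = np.arange(8 * 8 * 3).reshape(8, 8, 3)

# Keep only the pixels that have a corresponding point in the subsampled cloud.
mapped_rgb = full_rgb[row_offset::row_stride, col_offset::col_stride, :]
print(mapped_rgb.shape)  # (4, 4, 3)
```

Note how mapped pixel (1, 1) comes from full-resolution pixel (2, 2): each mapped index is multiplied by the stride and shifted by the offset.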

Note

If you use intrinsics and 2D and 3D capture have different resolutions, ensure you use them correctly. See Camera Intrinsics for more information.

External light considerations

The ideal light source for a 2D capture is strong, because it reduces the influence of ambient light, and diffuse, because it limits blooming effects. This light can come either from the internal projector or from an external light source. A third option is not to use any light at all.

Regardless of your chosen option, you may encounter blooming. When utilizing the internal projector as the light source, tilting the camera, changing the background, or tuning the 2D acquisition settings can mitigate the blooming effect. On the other hand, if using external light, ensuring the light is diffuse or angling it may help. It is important to note that external light introduces noise in the 3D data, so you should deactivate it during the 3D capture. Consequently, the use of external lights adds complexity to your cell setup and the scheduling of your machine vision pipeline.

Exposure variations caused by changes in ambient light, such as transitions from day to night, doors opening and closing, or changes in ceiling lighting, affect 2D and 3D data differently. For 2D data, they can impact segmentation performance, especially when it is trained on specific datasets. For 3D data, exposure variations may affect point cloud completeness due to varying noise levels. Using either the internal projector or external diffuse light helps reduce these variations.

The table below summarizes the pros and cons of the different options with respect to 2D quality.

                                        Internal projector   External light [2]   Ambient light
Robot cell setup                        Simple               Complex              Simple
Resilience to ambient light variations  Acceptable           Good                 Bad
Blooming in 2D images                   Likely               Unlikely             Likely
2D color balance needed                 No                   Likely               Yes

Our recommendation:
  • Separate 2D capture with the internal projector on.

Capture strategies

Optimizing for 3D quality does not necessarily give you satisfactory 2D quality. Therefore, if you depend on color information, we recommend having a separate 2D capture. We can break it down by which data you need first, giving the following three strategies:

  • 2D data before 3D data

  • 2D data as part of 3D data

  • 2D data after 3D data

Which strategy you should go for depends on your machine vision algorithms and pipeline. Below we summarize the performance of the different strategies. For a more in-depth understanding and comprehensive Zivid benchmarks, please see 2D + 3D Capture Strategy.

The following table lists the different 2D+3D capture configurations. It shows how they are expected to perform relative to each other with respect to speed and quality.

Capture Cycle          Speed (Zivid 2)   Speed (Zivid 2+)   2D-Quality
                       Faster            Fast               Best
3D ➞ 2D / 2D ➞ 3D      Fast              Fast               Best
3D (w/ RGB enabled)    Fastest           Fastest            Good

The following table shows actual measurements on different hardware. For the 3D capture, we use the Fast Consumer Goods settings.

Zivid 2+ (Z2+ M130 Fast)

Zivid 2 (Z2 M70 Fast)

Tip

To test different 2D-3D strategies on your PC, you can run the ZividBenchmark.cpp sample with settings loaded from YML files. Go to Samples, and select C++ for instructions.

In the following section, we guide you on selecting 3D and 2D settings based on capture speed.

Version History

SDK      Changes
2.12.0   Acquisition time is reduced by up to 50% for 2D captures and up to 5% for 3D captures for Zivid 2+. Zivid One+ has reached its End-of-Life and is no longer supported.
2.11.0   Added support for redSubsample4x4 and blueSubsample4x4.