2D + 3D Capture Strategy

Note: if you do not need color information, jump straight to the next section, which covers selecting 3D and 2D settings based on capture speed.

Many detection algorithms commonly used in piece-picking applications rely on 2D data to identify which object to pick. In this article, we provide insights into the different ways to acquire 2D information, their pros and cons, and the role of external lighting conditions. We also touch upon various 2D-3D approaches, their data quality, and how they affect cycle times.

There are two approaches to get 2D data:

  1. A separate 2D capture, via camera.capture(Zivid::Settings2D).imageRGBA(); see 2D Image Capture Process.

  2. As part of the 3D capture, via camera.capture(Zivid::Settings).pointCloud().copyImageRGBA(); see Point Cloud Capture Process.

Which one to use depends on your requirements and the machine vision pipeline. We advocate for a dedicated 2D capture as it provides better control over the 2D settings for color optimization and can leverage multi-threading and optimized scheduling. It also grants you increased flexibility in configuring desired camera resolution and projector settings. Utilizing 2D data from the 3D capture is simpler, but you may have to compromise speed to get desired 2D quality.

Tip

By taking a separate 2D capture, you can disable color in your 3D capture by setting Sampling::Color to disabled. This will reduce the capture time for the 3D acquisition.

Our recommendation:
  • Separate 2D capture with full resolution and projector on.

  • Subsampled 3D capture with color disabled.

Camera resolution and 1-to-1 mapping

For accurate 2D segmentation and detection, a high-resolution color image is beneficial. Zivid 2+ has a 5 MPx imaging sensor, while Zivid 2 and One+ have 2.3 MPx sensors. The following tables show the resolution outputs of the different cameras for both 2D and 3D captures.

2D capture resolutions

2D capture        Zivid One+    Zivid 2       Zivid 2+
Full Resolution   1920 x 1200   1944 x 1200   2448 x 2048

3D capture resolutions

3D capture            Zivid One+      Zivid 2         Zivid 2+
Full resolution [1]   1920 x 1200     1944 x 1200     2448 x 2048
2x2 subsampled [1]    Not available   972 x 600       1224 x 1024
4x4 subsampled [1]    Not available   Not available   612 x 512
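As a quick sanity check, the subsampled resolutions in the table follow directly from the full sensor resolution: 2x2 subsampling halves each dimension and 4x4 quarters them. A minimal Python sketch (no camera required; the helper function is illustrative, not part of the Zivid SDK):

```python
# Full 3D sensor resolutions (width x height) from the table above.
full_resolutions = {
    "Zivid 2": (1944, 1200),
    "Zivid 2+": (2448, 2048),
}

def subsampled(resolution, stride):
    # An n x n subsampled capture keeps every n-th pixel in each dimension.
    width, height = resolution
    return (width // stride, height // stride)

assert subsampled(full_resolutions["Zivid 2"], 2) == (972, 600)
assert subsampled(full_resolutions["Zivid 2+"], 2) == (1224, 1024)
assert subsampled(full_resolutions["Zivid 2+"], 4) == (612, 512)
```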

Observe that 2D captures always output full-resolution images, while 3D captures may be subsampled depending on the pixel sampling. This means that there is no longer a 1-to-1 correlation between a 2D pixel and a 3D point. Consequently, it is more challenging to extract the 3D data corresponding to a segmented mask in the 2D image. To restore the correlation, we can either subsample or downsample the 2D image,

[Image comparison: full resolution vs. quarter-resolution subsampled vs. quarter-resolution downsampled, shown at increasing zoom.]

or recompute the mapping by extracting the RGB values from the pixels that correspond to the blue or red pixels of the Bayer grid. The code below shows how to do this:


const auto pixelMapping = Zivid::Experimental::Calibration::pixelMapping(camera, settings);
std::cout << "Pixel mapping: " << pixelMapping << std::endl;
cv::Mat mappedBGR(
    fullResolutionBGR.rows / pixelMapping.rowStride(),
    fullResolutionBGR.cols / pixelMapping.colStride(),
    CV_8UC3);
std::cout << "Mapped width: " << mappedBGR.cols << ", height: " << mappedBGR.rows << std::endl;
for(size_t row = 0; row < static_cast<size_t>(fullResolutionBGR.rows - pixelMapping.rowOffset());
    row += pixelMapping.rowStride())
{
    for(size_t col = 0; col < static_cast<size_t>(fullResolutionBGR.cols - pixelMapping.colOffset());
        col += pixelMapping.colStride())
    {
        mappedBGR.at<cv::Vec3b>(row / pixelMapping.rowStride(), col / pixelMapping.colStride()) =
            fullResolutionBGR.at<cv::Vec3b>(row + pixelMapping.rowOffset(), col + pixelMapping.colOffset());
    }
}
return mappedBGR;

pixel_mapping = calibration.pixel_mapping(camera, settings)
return rgba[
    pixel_mapping.row_offset :: pixel_mapping.row_stride, pixel_mapping.col_offset :: pixel_mapping.col_stride, 0:3
]
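To see what the mapping does without camera hardware, here is a self-contained numpy sketch that mimics the snippets above for a 2x2 subsampled capture. The offsets and strides are illustrative values; in a real application they come from the pixelMapping / pixel_mapping call.

```python
import numpy as np

# Illustrative pixel mapping for a 2x2 subsampled capture. Real values are
# queried from the camera via the pixel-mapping API shown above.
row_offset, col_offset = 1, 0
row_stride, col_stride = 2, 2

# Fake full-resolution RGBA image, 8 x 8 pixels.
rgba = np.arange(8 * 8 * 4, dtype=np.uint8).reshape(8, 8, 4)

# Keep only the pixels that have a corresponding 3D point, and drop alpha.
mapped_rgb = rgba[row_offset::row_stride, col_offset::col_stride, 0:3]

print(mapped_rgb.shape)  # (4, 4, 3): a quarter of the pixels, RGB only
```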

For more insight into resolution, sampling and mapping, check out Monochrome Capture.
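To make the subsample-vs-downsample distinction above concrete, here is an illustrative numpy sketch: subsampling keeps every n-th pixel, while downsampling averages n x n blocks, trading sharpness for lower noise.

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)  # fake 2D image

# Subsampling: keep every second pixel. Sharp, but discards information.
subsampled = full[::2, ::2]

# Downsampling: average each 2x2 block. Smoother, averages out noise.
blocks = full.reshape(4, 2, 4, 2, 3).astype(np.float32)
downsampled = blocks.mean(axis=(1, 3)).astype(np.uint8)

print(subsampled.shape, downsampled.shape)  # (4, 4, 3) (4, 4, 3)
```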

Note

If you use intrinsics and 2D and 3D capture have different resolutions, ensure you use them correctly. See Camera Intrinsics for more information.

Our recommendation:
  • 2D capture with full resolution

  • 3D monochrome capture with subsampled resolution

External light considerations

The ideal light source for a 2D capture is strong, because it reduces the influence of ambient light, and diffuse, because this limits the blooming effects. This light source can either come from the internal projector or from an external light source. A third option is not to use any light at all.

Regardless of your chosen option, you may encounter blooming. When utilizing the internal projector as the light source, tilting the camera, changing the background, or tuning the 2D acquisition settings can mitigate the blooming effect. On the other hand, if using an external light, making the light diffuse or angling it may help. It is important to note that external light introduces noise in the 3D data, so you should deactivate it during the 3D capture. Consequently, the use of external lights adds complexity to your cell setup and to the scheduling of your machine vision pipeline.

Exposure variations caused by changes in ambient light, such as transitions from day to night, doors opening and closing, or changes in ceiling lighting, affect 2D and 3D data differently. For 2D data, they can impact segmentation performance, especially when it is trained on specific datasets. For 3D data, exposure variations may affect point cloud completeness due to varying noise levels. Using either the internal projector or an external diffuse light helps reduce these variations.

The table below summarizes the pros and cons of the different options with respect to 2D quality.

                                         Internal projector   External light [2]   Ambient light
Robot cell setup                         Simple               Complex              Simple
Resilience to ambient light variations   Acceptable           Good                 Bad
Blooming in 2D images                    Likely               Unlikely             Likely
2D color balance needed                  No                   Likely               Yes

Zivid One+ projector switching penalty

On Zivid One+, it is important to be aware of the switching penalty that occurs when the projector is on during 2D capture. This time penalty only occurs if the 2D capture settings use brightness > 0. For more information, see Limitation when performing captures in a sequence while switching between 2D and 3D capture calls.

If there is enough time between capture cycles, it is possible to mitigate the switching limitation: we can absorb the penalty while the system is doing something else, for example while the robot is moving in front of the camera. In this tutorial, we call this a dummy capture.
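The scheduling idea can be sketched without hardware. In this toy example the dummy 2D capture and the robot motion are simulated with sleeps (the function names and durations are made up for illustration); the point is that the dummy capture runs in the background while the robot is away, so the switching penalty does not add to the cycle time:

```python
import threading
import time

events = []

def dummy_2d_capture():
    # Stand-in for a 2D capture that absorbs the One+ switching penalty.
    time.sleep(0.05)  # simulated penalty
    events.append("dummy capture done")

def move_robot_to_drop_off():
    time.sleep(0.1)  # simulated robot motion, longer than the penalty
    events.append("robot back at camera")

# Start the dummy capture in the background, then move the robot.
capture_thread = threading.Thread(target=dummy_2d_capture)
capture_thread.start()
move_robot_to_drop_off()
capture_thread.join()

print(events)  # ['dummy capture done', 'robot back at camera']
```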

Note

For Zivid 2 and Zivid 2+ there is no longer any switching limitation. The firmware adapts to the following three scenarios:

  1. Capture 2D in a loop, with the same settings

  2. Capture 3D in a loop, with the same settings

  3. Capture 2D and then 3D, or vice versa, with the same settings in each loop

Do not apply a dummy capture for Zivid 2 and Zivid 2+.

Our recommendation:
  • Separate 2D capture with internal projector on

Capture strategies

Optimizing for 3D quality does not necessarily give you satisfactory 2D quality. Therefore, if you depend on color information, we recommend a separate 2D capture. We can break the decision down by which data you need first, which gives the three following strategies:

  • 2D data before 3D data

  • 2D data as part of 3D data

  • 2D data after 3D data

Which strategy you should go for depends on your machine vision algorithms and pipeline. Below we summarize the performance of the different strategies. For a more in-depth understanding and comprehensive ZividBenchmarks, please see 2D + 3D Capture Strategy.

The following tables list the different 2D + 3D capture configurations and show how they are expected to perform relative to each other with respect to speed and 2D quality. We distinguish between two scenarios:

  • Cycle time is so fast that each capture cycle needs to happen right after the other.

  • Cycle time is slow enough to allow an additional dummy capture between each capture cycle (only relevant for Zivid One+). An additional capture can take up to 800 ms in the worst case. A rule of thumb is that for cycle times greater than 2 seconds, a dummy capture saves time.
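The rule of thumb can be made concrete with a back-of-the-envelope model. The 800 ms worst-case penalty is taken from above; the rest of the model (the function name and the idle-time threshold) is an illustrative assumption: a dummy capture only pays off when the idle time between cycles is long enough to hide the penalty.

```python
SWITCHING_PENALTY = 0.8  # worst-case extra time in seconds (Zivid One+)

def effective_cycle_time(capture_time, idle_time, use_dummy_capture):
    """Toy model: total time per cycle, in seconds.

    Without a dummy capture the switching penalty lands inside the cycle.
    With one, the penalty is hidden in the idle window, if it fits there.
    """
    if use_dummy_capture and idle_time >= SWITCHING_PENALTY:
        return capture_time + idle_time
    return capture_time + SWITCHING_PENALTY + idle_time

# A slow cycle (> 2 s) leaves room to hide the penalty...
assert effective_cycle_time(1.0, 1.5, True) < effective_cycle_time(1.0, 1.5, False)
# ...a back-to-back cycle does not.
assert effective_cycle_time(1.0, 0.0, True) == effective_cycle_time(1.0, 0.0, False)
```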

Back-to-back captures

Capture Cycle                  Speed                                             2D-Quality
(no wait between cycles)       Zivid One+                 Zivid 2     Zivid 2+
                               2D w/ proj.   2D w/o proj.
2D with Projector              Faster                                            Best
2D without Projector           Fast                                              Best
3D ➞ 2D [4]                    Slower        Faster       Faster      Fast       Best
2D ➞ 3D [3]                    Slowest       Fast         Fast        Faster     Best
3D (w/2D [6])                  Slow          Slowest      Slowest     Slowest    Best
3D                             Fastest                    Fastest     Fastest    Good

Note

One+ only

For back-to-back captures, it is not possible to avoid the switching delay unless the 2D and 3D captures use the same projector brightness. In that case, however, it is better to set Color Mode to UseFirstAcquisition; see Color Mode.

Captures with low duty cycle

Capture Cycle                    Speed                                             2D-Quality
(time to wait for next cycle)    Zivid One+                 Zivid 2     Zivid 2+
                                 2D w/ proj.   2D w/o proj.
2D with Projector                Faster                                            Best
2D without Projector             Fast                                              Best
3D ➞ 2D [4] ➞ 3D ([5])           Slow          Fast         Faster      Fast       Best
2D ➞ 3D [3] ➞ 2D ([5])           Slower        Faster       Fast        Faster     Best
3D (w/2D [6]) ➞ 3D (w/2D [6])    Slowest                    Slowest     Slowest    Best
3D ➞ 3D                          Fastest                    Fastest     Fastest    Good

Following is a table showing actual measurements on different hardware. For the 3D capture we use the Fast Consumer Goods settings.

[Measured capture cycle times: Zivid 2+ (Z2+ M130 Fast), Zivid 2 (Z2 M70 Fast), Zivid One+ (Z1+ M Fast)]

Tip

To test different 2D-3D strategies on your PC, you can run the ZividBenchmark.cpp sample with settings loaded from YML files. Go to Samples and select C++ for instructions.

In the following section, we guide you on selecting 3D and 2D settings based on capture speed.

Version History

SDK      Changes
2.11.0   Added support for redSubsample4x4 and blueSubsample4x4.