Point Cloud Capture Process

The capture API returns at some point after the camera completes capturing the raw images and before, or at the moment, the point cloud processing finishes. The exact timing depends on the GPU (vendor and driver). Therefore, after the capture API returns, the point cloud processing might still be running in the background.


Use recommended hardware to minimize computation time.



const auto frame = camera.capture(settings);

The API to get the point cloud always returns a point cloud object right away. The point cloud object holds a handle to the point cloud data in GPU memory. Point cloud processing might also still be running in the background at this point.



const auto pointCloud = frame.pointCloud();

Next, we call the API that copies the point cloud from GPU memory to CPU memory. The copy function blocks until the data is available, then performs the copy. When the function returns, the data is ready for use in CPU memory.


Even on a CPU with an integrated GPU, the data is copied to a different area of the same main RAM.



const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();

When the point cloud is available in CPU memory, we can use it in our machine vision application.

Performance considerations

Zivid Point Cloud Manipulation Functions

Operations on the point cloud in the Zivid SDK, e.g., Downsample, Transform, and Normals, are computed on the GPU while the point cloud is still in GPU memory. This avoids the extra time of moving the data back and forth between GPU and CPU for computation. Downsampling on the GPU with the Zivid API also leaves less data to copy to the CPU. For performance reasons it is therefore beneficial to use the Zivid SDK for these operations. Implementing them with a third-party library is typically slower: on the CPU the computations themselves take longer, and performing them on the GPU with other software requires copying the data from the GPU to the CPU and back again.

Reliability in Exposure Time


This applies only to Zivid Two. Zivid One+ does not have this feature.

When capturing in HDR mode, Zivid Two queues the exposures from each acquisition back to back. The camera is done capturing the scene when the projector has finished flashing the patterns. This lets you reliably determine when the capture of the scene is complete and the scene can be changed, for example by moving the robot or the objects in the scene.

For example, if you use an HDR capture consisting of a \(10ms\) exposure time and a \(5ms\) exposure time, you can use the following equation. This assumes the Phase engine for the 3D capture, which projects 13 image patterns per exposure.

\[(10ms * 13) + (5ms * 13) + 5ms_{tolerance} = 200ms\]

The robot can move after \(200ms\), with a safety margin included at the end of the 3D acquisition time. The number 13 is the number of patterns in the Phase engine. Use the following table to determine the number of patterns for an acquisition:

Be aware that using the Zivid Two at a high duty cycle can trigger the thermal safety mechanism. This makes the timing unpredictable, because the camera slows down its duty cycle to prevent overheating. This can occur when the duty cycle is above 50% and the brightness is above 1.0. For more on this topic, see the projector brightness page.

Version History

Improved capture speed of Zivid Two.