When you capture with Zivid, you get a frame in return.
The point cloud is stored in the frame, and the underlying point cloud data resides in GPU memory.
The capture can contain color or not, depending on the method that you call.
For more information, see the table of the different capture modes.
The method Zivid::Frame::pointCloud() does not perform any copying from GPU memory.
Note
The Zivid::Camera::capture2D3D() and Zivid::Camera::capture3D() methods return at some point after the camera has completed capturing the raw images.
The handle from Zivid::Frame::pointCloud() is available instantly.
However, the actual point cloud data becomes available only after the processing on the GPU is finished.
Any calls to data-copy functions (section below) will block and wait for processing to finish before proceeding with the requested copy operation.
Getting the property Zivid.NET.Frame.PointCloud does not perform any copying from GPU memory.
Note
The Zivid.NET.Camera.Capture2D3D() and Zivid.NET.Camera.Capture3D() methods return at some point after the camera has completed capturing the raw images.
The handle from Zivid.NET.Frame.PointCloud is available instantly.
However, the actual point cloud data becomes available only after the processing on the GPU is finished.
Any calls to data-copy methods (section below) will block and wait for processing to finish before proceeding with the requested copy operation.
The function zivid.frame.point_cloud() does not perform any copying from GPU memory.
Note
The zivid.camera.capture_2d_3d() and zivid.camera.capture_3d() functions return at some point after the camera has completed capturing the raw images.
The handle from zivid.frame.point_cloud() is available instantly.
However, the actual point cloud data becomes available only after the processing on the GPU is finished.
Any calls to data-copy functions (section below) will block and wait for processing to finish before proceeding with the requested copy operation.
It is possible to convert the organized point cloud to an unorganized point cloud.
While doing so, all NaN values are removed, and the point cloud is flattened to a 1D array.
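The flattening step can be sketched with NumPy. This is a hypothetical illustration of the concept, not the Zivid implementation: an organized H x W grid with NaN for missing points becomes a dense 1D list of valid points.

```python
import numpy as np

# Hypothetical organized point cloud: H x W x 3, with NaN for missing points
organized = np.full((4, 5, 3), np.nan, dtype=np.float32)
organized[1, 2] = [10.0, 20.0, 500.0]
organized[3, 0] = [-5.0, 0.0, 480.0]

# Flatten the grid to (H*W, 3), then drop every row that contains NaN
flat = organized.reshape(-1, 3)
valid = ~np.isnan(flat).any(axis=1)
unorganized = flat[valid]

print(unorganized.shape)  # (2, 3): only the valid points remain
```

Note that the unorganized result no longer carries the pixel-grid structure, so per-pixel lookups are not possible after the conversion.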
You can then selectively copy only the data you require.
The table below is the complete list of output data formats and the functions that copy them from the GPU.
Most of these APIs also apply to the unorganized point cloud.
| Return type | Copy functions | Data per pixel | Total data |
|---|---|---|---|
| Zivid::Array2D<Zivid::PointXYZ> | PointCloud::copyPointsXYZ() or PointCloud::copyData<Zivid::PointXYZ>() | 12 bytes | 28 MB |
| Zivid::Array2D<Zivid::PointXYZW> | PointCloud::copyPointsXYZW() or PointCloud::copyData<Zivid::PointXYZW>() | 16 bytes | 37 MB |
| Zivid::Array2D<Zivid::PointZ> | PointCloud::copyPointsZ() or PointCloud::copyData<Zivid::PointZ>() | 4 bytes | 9 MB |
| Zivid::Array2D<Zivid::ColorRGBA> | PointCloud::copyColorsRGBA() or PointCloud::copyData<Zivid::ColorRGBA>() | 4 bytes | 9 MB |
| Zivid::Array2D<Zivid::SNR> | PointCloud::copySNRs() or PointCloud::copyData<Zivid::SNR>() | 4 bytes | 9 MB |
| Zivid::Array2D<Zivid::PointXYZColorRGBA> | PointCloud::copyData<PointXYZColorRGBA>() | 16 bytes | 37 MB |
| Zivid::Array2D<Zivid::PointXYZColorBGRA> | PointCloud::copyPointsXYZColorsBGRA() or PointCloud::copyData<PointXYZColorBGRA>() | 16 bytes | 37 MB |
std::cout << "Capturing frame" << std::endl;
frame = camera.capture2D3D(settings);
std::cout << "Copying colors with Zivid API from GPU to CPU" << std::endl;
auto colors = frame.frame2D().value().imageBGRA_SRGB();
std::cout << "Casting the data pointer as a void*, since this is what the OpenCV matrix constructor requires." << std::endl;
auto *dataPtrZividAllocated = const_cast<void *>(static_cast<const void *>(colors.data()));
std::cout << "Wrapping this block of data in an OpenCV matrix. This is possible since the layout of \n"
          << "Zivid::ColorBGRA_SRGB exactly matches the layout of CV_8UC4. No copying occurs in this step."
          << std::endl;
const cv::Mat bgraZividAllocated(colors.height(), colors.width(), CV_8UC4, dataPtrZividAllocated);
std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image Zivid Allocated", bgraZividAllocated);
cv::waitKey(CI_WAITKEY_TIMEOUT_IN_MS);
Copy selected data from GPU to CPU memory (user-allocated)
In the above example, ownership of the data was held by the returned Zivid::Array2D<> objects.
Alternatively, you may provide a pre-allocated memory buffer to Zivid::PointCloud::copyData(dataPtr).
The type of dataPtr defines what shall be copied (PointXYZ, ColorRGBA, etc.).
Now let us look at the exact same use case as above.
However, this time, we allow OpenCV to allocate the necessary storage.
Then we ask the Zivid API to copy data directly from the GPU into this memory location.
std::cout << "Allocating the necessary storage with OpenCV API based on resolution info before any capturing" << std::endl;
auto bgraUserAllocated = cv::Mat(resolution.height(), resolution.width(), CV_8UC4);
std::cout << "Capturing frame" << std::endl;
auto frame = camera.capture2D3D(settings);
auto pointCloud = frame.pointCloud();
std::cout << "Copying data with Zivid API from the GPU into the memory location allocated by OpenCV" << std::endl;
pointCloud.copyData(&(*bgraUserAllocated.begin<Zivid::ColorBGRA_SRGB>()));
std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image User Allocated", bgraUserAllocated);
cv::waitKey(CI_WAITKEY_TIMEOUT_IN_MS);
Copy unorganized point cloud data from GPU to CPU memory (Open3D-tensor)
open3d::t::geometry::PointCloud copyToOpen3D(const Zivid::UnorganizedPointCloud &pointCloud)
{
    auto device = open3d::core::Device("CPU:0");
    auto xyzTensor = open3d::core::Tensor(
        { static_cast<int64_t>(pointCloud.size()), 3 }, open3d::core::Dtype::Float32, device);
    auto rgbTensor = open3d::core::Tensor(
        { static_cast<int64_t>(pointCloud.size()), 3 }, open3d::core::Dtype::Float32, device);

    pointCloud.copyData(reinterpret_cast<Zivid::PointXYZ *>(xyzTensor.GetDataPtr<float>()));

    // Open3D does not store colors in 8-bit
    auto *rgbPtr = rgbTensor.GetDataPtr<float>();
    auto rgbaColors = pointCloud.copyColorsRGBA_SRGB();
    for(size_t i = 0; i < pointCloud.size(); ++i)
    {
        rgbPtr[3 * i] = static_cast<float>(rgbaColors(i).r) / 255.0f;
        rgbPtr[3 * i + 1] = static_cast<float>(rgbaColors(i).g) / 255.0f;
        rgbPtr[3 * i + 2] = static_cast<float>(rgbaColors(i).b) / 255.0f;
    }

    open3d::t::geometry::PointCloud cloud(device);
    cloud.SetPointPositions(xyzTensor);
    cloud.SetPointColors(rgbTensor);
    return cloud;
}
The following example shows how to create a new instance of Zivid::UnorganizedPointCloud with a transformation applied to it.
Note that in this sample it is not necessary to create a new instance, as the untransformed point cloud is not used after the transformation.
Sometimes you might not need a point cloud with as high spatial resolution as the camera provides.
You may then downsample the point cloud.
Note
Sampling (3D) describes a hardware-based sub-/downsample method that reduces the resolution of the point cloud during capture while also reducing the acquisition and capture time.
Note
Zivid::UnorganizedPointCloud does not support downsampling, but it does support voxel downsampling, see Voxel downsample.
Downsampling can be done in-place, which modifies the current point cloud.
Zivid::UnorganizedPointCloud supports voxel downsampling.
The API takes two arguments:
voxelSize - the size of the voxel in millimeters.
minPointsPerVoxel - the minimum number of points per voxel to keep it.
Voxel downsampling subdivides 3D space into a grid of cubic voxels with a given size.
If a given voxel contains a number of points at or above the given limit, all those source points are replaced with a single point with the following properties:
Position (XYZ) is an SNR-weighted average of the source points’ positions, i.e. a high-confidence source point will have a greater influence on the resulting position than a low-confidence one.
Color (RGBA) is the average of the source points’ colors.
Signal-to-noise ratio (SNR) is sqrt(sum(SNR^2)) of the source points’ SNR values, i.e. the SNR of a new point will increase with both the number and the confidence of the source points that were used to compute its position.
Using minPointsPerVoxel > 1 is particularly useful for removing noise and artifacts from unorganized point clouds that are a combination of point clouds captured from different angles.
This is because a given artifact is most likely only present in one of the captures, and minPointsPerVoxel can be used to only fill voxels that both captures “agree” on.
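The voxel logic described above can be sketched in plain NumPy. This is an illustrative approximation under stated assumptions (a hypothetical `voxel_downsample` helper, not the Zivid implementation): points are binned into cubic voxels, sparsely populated voxels are dropped, positions are SNR-weighted averages, and the combined SNR is sqrt(sum(SNR^2)).

```python
import numpy as np

def voxel_downsample(xyz, snr, voxel_size, min_points_per_voxel):
    """Sketch of voxel downsampling with SNR-weighted positions."""
    # Assign each point to a voxel by its integer grid coordinates
    keys = np.floor(xyz / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)

    out_xyz, out_snr = [], []
    for v in range(counts.size):
        if counts[v] < min_points_per_voxel:
            continue  # too few points: drop the voxel (noise/artifact rejection)
        mask = inverse == v
        w = snr[mask]
        # Position: SNR-weighted average of the source points
        out_xyz.append((xyz[mask] * w[:, None]).sum(axis=0) / w.sum())
        # SNR: sqrt of the sum of squared source SNRs
        out_snr.append(np.sqrt((w ** 2).sum()))
    return np.array(out_xyz), np.array(out_snr)

# Two points sharing one voxel, plus a lone point in another voxel
xyz = np.array([[1.0, 1.0, 1.0], [1.5, 1.5, 1.5], [50.0, 50.0, 50.0]])
snr = np.array([3.0, 4.0, 2.0])
pts, snrs = voxel_downsample(xyz, snr, voxel_size=5.0, min_points_per_voxel=2)
print(pts, snrs)  # one surviving point; its SNR is sqrt(3^2 + 4^2) = 5
```

With min_points_per_voxel=2, the lone point is discarded, which is exactly the "agreement" filtering described above for multi-view captures.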
print("Computing normals and copying them to CPU memory")
normals = point_cloud.copy_data("normals")
The Normals API computes the normal at each point in the point cloud and copies normals from the GPU memory to the CPU memory.
The result is a matrix of normal vectors, one for each point in the input point cloud.
The size of normals is equal to the size of the input point cloud.
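Conceptually, a per-pixel normal can be estimated from neighboring points in the organized grid. Below is a simplified NumPy sketch of that idea (a hypothetical `estimate_normals` helper using neighbor-difference cross products; the Zivid implementation is GPU-accelerated and more robust):

```python
import numpy as np

def estimate_normals(organized_xyz):
    """Estimate normals on an organized H x W x 3 point grid from the
    cross product of horizontal and vertical neighbor differences."""
    dx = organized_xyz[:, 1:, :] - organized_xyz[:, :-1, :]  # horizontal neighbor step
    dy = organized_xyz[1:, :, :] - organized_xyz[:-1, :, :]  # vertical neighbor step
    # Restrict both to the overlapping (H-1) x (W-1) region so shapes match
    n = np.cross(dx[:-1, :, :], dy[:, :-1, :])
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.where(norm == 0, 1, norm)  # unit-length normals

# Flat plane at z = 500: every estimated normal points along the z-axis
yy, xx = np.mgrid[0:4, 0:5].astype(np.float64)
plane = np.dstack([xx, yy, np.full_like(xx, 500.0)])
normals = estimate_normals(plane)
print(normals[0, 0])  # [0. 0. 1.]
```

A real implementation would also handle NaN (missing) points and fit a plane over a larger neighborhood to reduce noise.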
std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;
std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(frame);
visualizer.resetToFit();
std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
Console.WriteLine("Setting up visualization");
using (var visualizer = new Zivid.NET.Visualization.Visualizer())
{
    Console.WriteLine("Visualizing point cloud");
    visualizer.Show(frame);
    visualizer.ShowMaximized();
    visualizer.ResetToFit();
    Console.WriteLine("Running visualizer. Blocking until window closes.");
    visualizer.Run();
}
You can visualize the point cloud from the point cloud object as well.
std::cout << "Getting point cloud from frame" << std::endl;
auto pointCloud = frame.pointCloud();
std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;
std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(pointCloud);
visualizer.resetToFit();
std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
Console.WriteLine("Getting point cloud from frame");
var pointCloud = frame.PointCloud;
Console.WriteLine("Setting up visualization");
var visualizer = new Zivid.NET.Visualization.Visualizer();
Console.WriteLine("Visualizing point cloud");
visualizer.Show(pointCloud);
visualizer.ShowMaximized();
visualizer.ResetToFit();
Console.WriteLine("Running visualizer. Blocking until window closes.");
visualizer.Run();
For more information, check out the Visualization Tutorial, where we cover visualization of the point cloud, color image, depth map, and normals, with implementations using third-party libraries.
Added support for Zivid::UnorganizedPointCloud.
The transformed() function was added to Zivid::PointCloud (and is also available in Zivid::UnorganizedPointCloud).