It is possible to convert the organized point cloud to an unorganized point cloud.
While doing so, all NaN values are removed, and the point cloud is flattened to a 1D array.
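The flattening step can be pictured with the following standalone sketch. This is plain C++ with no Zivid types; `PointXYZ` and `flattenOrganized` are illustrative stand-ins for the conversion the SDK performs internally, not SDK API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct PointXYZ
{
    float x, y, z;
    bool isNaN() const
    {
        return std::isnan(x) || std::isnan(y) || std::isnan(z);
    }
};

// Flatten an organized (height x width) grid of points into a 1D array,
// dropping every point that contains NaN values.
std::vector<PointXYZ> flattenOrganized(const std::vector<PointXYZ> &grid)
{
    std::vector<PointXYZ> flat;
    flat.reserve(grid.size());
    for(const auto &p : grid)
    {
        if(!p.isNaN())
        {
            flat.push_back(p);
        }
    }
    return flat;
}
```

The result is a dense array with no pixel-grid structure, which is why 2D image operations no longer apply after the conversion.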
You can now selectively copy data based on what is required.
This is the complete list of output data formats and how to copy them from the GPU.
Most of these APIs also apply to the unorganized point cloud.
std::cout << "Capturing frame" << std::endl;
frame = camera.capture2D3D(settings);

std::cout << "Copying colors with Zivid API from GPU to CPU" << std::endl;
auto colors = frame.frame2D().value().imageBGRA_SRGB();

std::cout << "Casting the data pointer as a void*, since this is what the OpenCV matrix constructor requires." << std::endl;
auto *dataPtrZividAllocated = const_cast<void *>(static_cast<const void *>(colors.data()));

std::cout << "Wrapping this block of data in an OpenCV matrix. This is possible since the layout of \n"
          << "Zivid::ColorBGRA_SRGB exactly matches the layout of CV_8UC4. No copying occurs in this step."
          << std::endl;
const cv::Mat bgraZividAllocated(colors.height(), colors.width(), CV_8UC4, dataPtrZividAllocated);

std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image Zivid Allocated", bgraZividAllocated);
cv::waitKey(CI_WAITKEY_TIMEOUT_IN_MS);
std::cout << "Allocating the necessary storage with OpenCV API based on resolution info before any capturing" << std::endl;
auto bgraUserAllocated = cv::Mat(resolution.height(), resolution.width(), CV_8UC4);

std::cout << "Capturing frame" << std::endl;
auto frame = camera.capture2D3D(settings);
auto pointCloud = frame.pointCloud();

std::cout << "Copying data with Zivid API from the GPU into the memory location allocated by OpenCV" << std::endl;
pointCloud.copyData(&(*bgraUserAllocated.begin<Zivid::ColorBGRA_SRGB>()));

std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image User Allocated", bgraUserAllocated);
cv::waitKey(CI_WAITKEY_TIMEOUT_IN_MS);
Copy unorganized point cloud data from GPU to CPU memory (Open3D-tensor)
open3d::t::geometry::PointCloud copyToOpen3D(const Zivid::UnorganizedPointCloud &pointCloud)
{
    using namespace open3d::core;

    auto device = Device("CPU:0");
    auto xyzTensor = Tensor({ static_cast<int64_t>(pointCloud.size()), 3 }, Dtype::Float32, device);
    auto rgbTensor = Tensor({ static_cast<int64_t>(pointCloud.size()), 3 }, Dtype::Float32, device);

    pointCloud.copyData(reinterpret_cast<Zivid::PointXYZ *>(xyzTensor.GetDataPtr<float>()));

    // Open3D does not store colors in 8-bit
    const auto rgbaColors = pointCloud.copyColorsRGBA_SRGB();
    for(size_t i = 0; i < pointCloud.size(); ++i)
    {
        const auto r = static_cast<float>(rgbaColors(i).r) / 255.0f;
        const auto g = static_cast<float>(rgbaColors(i).g) / 255.0f;
        const auto b = static_cast<float>(rgbaColors(i).b) / 255.0f;
        rgbTensor.SetItem(TensorKey::Index(i), Tensor::Init({ r, g, b }));
    }

    open3d::t::geometry::PointCloud cloud(device);
    cloud.SetPointPositions(xyzTensor);
    cloud.SetPointColors(rgbTensor);
    return cloud;
}
The following example shows how to create a new instance of Zivid::UnorganizedPointCloud with a transformation applied to it.
Note that in this sample it is not necessary to create a new instance, as the untransformed point cloud is not used after the transformation.
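What "a new instance with a transformation applied" means can be sketched without any Zivid types. This is an illustrative stand-in, not the SDK API: `transformedPoints` returns a freshly allocated container and leaves its input untouched, which mirrors the difference between a `transformed`-style function and an in-place `transform`:

```cpp
#include <array>
#include <cassert>
#include <vector>

using Matrix4x4 = std::array<std::array<float, 4>, 4>;

struct PointXYZ
{
    float x, y, z;
};

// Apply a 4x4 affine transform to every point and return the result as a new
// vector; the input vector is not modified.
std::vector<PointXYZ> transformedPoints(const std::vector<PointXYZ> &points, const Matrix4x4 &m)
{
    std::vector<PointXYZ> out;
    out.reserve(points.size());
    for(const auto &p : points)
    {
        out.push_back({ m[0][0] * p.x + m[0][1] * p.y + m[0][2] * p.z + m[0][3],
                        m[1][0] * p.x + m[1][1] * p.y + m[1][2] * p.z + m[1][3],
                        m[2][0] * p.x + m[2][1] * p.y + m[2][2] * p.z + m[2][3] });
    }
    return out;
}
```

Keeping the untransformed copy only makes sense when you still need it afterwards; otherwise an in-place transform avoids the extra allocation.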
Zivid::UnorganizedPointCloud supports voxel downsampling.
The API takes two arguments:
voxelSize - the size of the voxel in millimeters.
minPointsPerVoxel - the minimum number of points per voxel to keep it.
Voxel downsampling subdivides 3D space into a grid of cubic voxels with a given size.
If a given voxel contains a number of points at or above the given limit, all those source points are replaced with a single point with the following properties:
Position (XYZ) is an SNR-weighted average of the source points' positions, i.e. a high-confidence source point will have a greater influence on the resulting position than a low-confidence one.
Color (RGBA) is the average of the source points' colors.
Signal-to-noise ratio (SNR) is sqrt(sum(SNR^2)) of the source points' SNR values, i.e. the SNR of a new point will increase with both the number and the confidence of the source points that were used to compute its position.
Using minPointsPerVoxel > 1 is particularly useful for removing noise and artifacts from unorganized point clouds that are a combination of point clouds captured from different angles.
This is because a given artifact is most likely only present in one of the captures, and minPointsPerVoxel can be used to only fill voxels that both captures "agree" on.
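The procedure above can be sketched in self-contained C++. This is not the SDK implementation: `Point`, `voxelDownsample`, and the choice of plain SNR as the position weight are illustrative assumptions; only the grid bucketing, the `minPointsPerVoxel` filter, and the `sqrt(sum(SNR^2))` combination follow directly from the description:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Point
{
    float x, y, z;
    float snr;
};

// Bucket points into cubic voxels, drop voxels with fewer than
// minPointsPerVoxel points, and emit one point per surviving voxel with an
// SNR-weighted average position and a combined SNR of sqrt(sum(SNR^2)).
std::vector<Point> voxelDownsample(const std::vector<Point> &points, float voxelSize, std::size_t minPointsPerVoxel)
{
    std::map<std::tuple<int64_t, int64_t, int64_t>, std::vector<Point>> voxels;
    for(const auto &p : points)
    {
        const auto key = std::make_tuple(static_cast<int64_t>(std::floor(p.x / voxelSize)),
                                         static_cast<int64_t>(std::floor(p.y / voxelSize)),
                                         static_cast<int64_t>(std::floor(p.z / voxelSize)));
        voxels[key].push_back(p);
    }

    std::vector<Point> result;
    for(const auto &entry : voxels)
    {
        const auto &bucket = entry.second;
        if(bucket.size() < minPointsPerVoxel)
        {
            continue; // the captures do not "agree" on this voxel
        }

        float sumSnr = 0.0f;
        float sumSnrSq = 0.0f;
        Point merged{ 0.0f, 0.0f, 0.0f, 0.0f };
        for(const auto &p : bucket)
        {
            merged.x += p.snr * p.x; // high-confidence points pull harder
            merged.y += p.snr * p.y;
            merged.z += p.snr * p.z;
            sumSnr += p.snr;
            sumSnrSq += p.snr * p.snr;
        }
        merged.x /= sumSnr;
        merged.y /= sumSnr;
        merged.z /= sumSnr;
        merged.snr = std::sqrt(sumSnrSq); // sqrt(sum(SNR^2))
        result.push_back(merged);
    }
    return result;
}
```

With `minPointsPerVoxel = 2`, a stray artifact point that lands alone in its voxel is discarded, while regions covered by both captures survive with boosted SNR.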
std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;

std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(frame);
visualizer.resetToFit();

std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
Console.WriteLine("Setting up visualization");
using(var visualizer = new Zivid.NET.Visualization.Visualizer())
{
    Console.WriteLine("Visualizing point cloud");
    visualizer.Show(frame);
    visualizer.ShowMaximized();
    visualizer.ResetToFit();

    Console.WriteLine("Running visualizer. Blocking until window closes.");
    visualizer.Run();
}
std::cout << "Getting point cloud from frame" << std::endl;
auto pointCloud = frame.pointCloud();

std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;

std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(pointCloud);
visualizer.resetToFit();

std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
Console.WriteLine("Getting point cloud from frame");
var pointCloud = frame.PointCloud;

Console.WriteLine("Setting up visualization");
var visualizer = new Zivid.NET.Visualization.Visualizer();

Console.WriteLine("Visualizing point cloud");
visualizer.Show(pointCloud);
visualizer.ShowMaximized();
visualizer.ResetToFit();

Console.WriteLine("Running visualizer. Blocking until window closes.");
visualizer.Run();
Added support for Zivid::UnorganizedPointCloud.
A transformed function has been added to Zivid::PointCloud (also available on Zivid::UnorganizedPointCloud).