The method Zivid::Frame::pointCloud() does not perform any copying from GPU memory.
Note
The Zivid::Camera::capture() method returns shortly after the camera has finished capturing the raw images.
The handle from Zivid::Frame::pointCloud() is available instantly.
However, the actual point cloud data becomes available only after the processing on the GPU is finished.
Any calls to data-copy functions (section below) will block and wait for processing to finish before proceeding with the requested copy operation.
Getting the property Zivid.NET.Frame.PointCloud does not perform any copying from GPU memory.
Note
The Zivid.NET.Camera.Capture() method returns shortly after the camera has finished capturing the raw images.
The handle from Zivid.NET.Frame.PointCloud is available instantly.
However, the actual point cloud data becomes available only after the processing on the GPU is finished.
Any calls to data-copy methods (section below) will block and wait for processing to finish before proceeding with the requested copy operation.
The function zivid.frame.point_cloud() does not perform any copying from GPU memory.
Note
The zivid.camera.capture() method returns shortly after the camera has finished capturing the raw images.
The handle from zivid.frame.point_cloud() is available instantly.
However, the actual point cloud data becomes available only after the processing on the GPU is finished.
Any calls to data-copy functions (section below) will block and wait for processing to finish before proceeding with the requested copy operation.
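The lazy pattern described in the note above can be made concrete with a plain-Python sketch that needs no camera or Zivid SDK: a background thread stands in for the GPU processing, the handle is returned immediately, and the first data copy blocks until processing has finished. All class and function names in this sketch are illustrative, not part of the Zivid API.

```python
import threading
import time

class FakePointCloudHandle:
    """Illustrative stand-in for the handle returned by frame.point_cloud().

    The handle itself is ready immediately; copy_data() blocks until the
    simulated GPU processing has finished, mirroring how the real
    data-copy functions behave.
    """

    def __init__(self):
        self._done = threading.Event()
        self._data = None
        # Simulated GPU processing running in the background.
        threading.Thread(target=self._process, daemon=True).start()

    def _process(self):
        time.sleep(0.2)  # pretend the GPU is still processing
        self._data = [[0.0, 0.0, 0.0]] * 4  # pretend XYZ point data
        self._done.set()

    def copy_data(self):
        self._done.wait()  # block until processing is finished
        return list(self._data)  # then perform the copy

start = time.monotonic()
handle = FakePointCloudHandle()  # returns instantly
handle_time = time.monotonic() - start
points = handle.copy_data()  # blocks ~0.2 s, then copies
copy_time = time.monotonic() - start
print(f"handle after {handle_time:.3f} s, data after {copy_time:.3f} s")
```

Timing the two steps shows the asymmetry: obtaining the handle is effectively free, while the first copy pays the full processing latency.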
std::cout << "Capturing frame" << std::endl;
frame = camera.capture2D3D(settings);
std::cout << "Copying colors with Zivid API from GPU to CPU" << std::endl;
auto colors = frame.frame2D().value().imageBGRA();
std::cout << "Casting the data pointer as a void*, since this is what the OpenCV matrix constructor requires." << std::endl;
auto *dataPtrZividAllocated = const_cast<void *>(static_cast<const void *>(colors.data()));
std::cout << "Wrapping this block of data in an OpenCV matrix. This is possible since the layout of \n"
          << "Zivid::ColorBGRA exactly matches the layout of CV_8UC4. No copying occurs in this step."
          << std::endl;
const cv::Mat bgraZividAllocated(colors.height(), colors.width(), CV_8UC4, dataPtrZividAllocated);
std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image Zivid Allocated", bgraZividAllocated);
cv::waitKey(0);
Copy selected data from GPU to CPU memory (user-allocated)
In the above example, ownership of the data was held by the returned Zivid::Array2D<> objects.
Alternatively, you may provide a pre-allocated memory buffer to Zivid::PointCloud::copyData(dataPtr).
The type of dataPtr defines what shall be copied (PointXYZ, ColorRGBA, etc.).
Now let us look at the exact same use case as above.
However, this time, we allow OpenCV to allocate the necessary storage.
Then we ask the Zivid API to copy data directly from the GPU into this memory location.
std::cout << "Allocating the necessary storage with OpenCV API based on resolution info before any capturing" << std::endl;
auto bgraUserAllocated = cv::Mat(resolution.height(), resolution.width(), CV_8UC4);
std::cout << "Capturing frame" << std::endl;
auto frame = camera.capture2D3D(settings);
auto pointCloud = frame.pointCloud();
std::cout << "Copying data with Zivid API from the GPU into the memory location allocated by OpenCV" << std::endl;
pointCloud.copyData(&(*bgraUserAllocated.begin<Zivid::ColorBGRA>()));
std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image User Allocated", bgraUserAllocated);
cv::waitKey(0);
Sometimes you might not need a point cloud with as high a spatial resolution as the camera provides.
You may then downsample the point cloud.
Note
Sampling (3D) describes a hardware-based subsampling method that reduces the resolution of the point cloud during capture, while also reducing the acquisition and capture time.
Downsampling can be done in-place, which modifies the current point cloud.
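To illustrate what downsampling does to an organized point cloud, here is a minimal numpy sketch of 2x2 block averaging. This is an illustration only: the actual Zivid downsampling runs on the GPU and differs in detail (for example, it must handle missing points), and the function name here is not part of the Zivid API.

```python
import numpy as np

def downsample_by_2x2(xyz):
    """Average each 2x2 block of an organized (H, W, 3) point cloud.

    Illustrative sketch only; the real Zivid downsampling runs on the
    GPU and also deals with NaN (missing) points.
    """
    h, w, _ = xyz.shape
    # Group the cloud into non-overlapping 2x2 blocks and average each block.
    blocks = xyz[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, 3)
    return blocks.mean(axis=(1, 3))

xyz = np.random.rand(4, 6, 3)  # toy organized point cloud
small = downsample_by_2x2(xyz)
print(xyz.shape, "->", small.shape)  # (4, 6, 3) -> (2, 3, 3)
```

Each axis of the output is half the input, so the downsampled cloud holds a quarter of the points.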
print("Computing normals and copying them to CPU memory")
normals = point_cloud.copy_data("normals")
The Normals API computes the normal at each point in the point cloud and copies the normals from GPU memory to CPU memory.
The result is a matrix of normal vectors, one for each point in the input point cloud.
The size of normals is equal to the size of the input point cloud.
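To build intuition for what such a normal map contains, the following numpy sketch estimates a unit normal per point of an organized cloud by crossing the row and column tangents of the grid. This is only an illustration of the general technique; the Zivid Normals API runs on the GPU and handles missing points, and the function here is not part of that API.

```python
import numpy as np

def grid_normals(xyz):
    """Estimate a unit normal per point of an organized (H, W, 3) cloud.

    Illustrative sketch: cross the column/row tangents and normalize.
    """
    du = np.gradient(xyz, axis=1)  # tangent along image columns
    dv = np.gradient(xyz, axis=0)  # tangent along image rows
    n = np.cross(du, dv)
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# Toy example: the tilted plane z = x, whose normal is (-1, 0, 1) / sqrt(2).
ys, xs = np.mgrid[0:5, 0:5].astype(float)
xyz = np.dstack([xs, ys, xs])
normals = grid_normals(xyz)
print(normals.shape)  # one normal per input point: (5, 5, 3)
```

Note that the output has exactly one normal per input point, matching the size relationship described above.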
std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;
std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(frame);
visualizer.resetToFit();
std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
Console.WriteLine("Setting up visualization");
using (var visualizer = new Zivid.NET.Visualization.Visualizer())
{
    Console.WriteLine("Visualizing point cloud");
    visualizer.Show(frame);
    visualizer.ShowMaximized();
    visualizer.ResetToFit();
    Console.WriteLine("Running visualizer. Blocking until window closes.");
    visualizer.Run();
}
You can visualize the point cloud from the point cloud object as well.
std::cout << "Getting point cloud from frame" << std::endl;
auto pointCloud = frame.pointCloud();
std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;
std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(pointCloud);
visualizer.resetToFit();
std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
Console.WriteLine("Getting point cloud from frame");
var pointCloud = frame.PointCloud;
Console.WriteLine("Setting up visualization");
var visualizer = new Zivid.NET.Visualization.Visualizer();
Console.WriteLine("Visualizing point cloud");
visualizer.Show(pointCloud);
visualizer.ShowMaximized();
visualizer.ResetToFit();
Console.WriteLine("Running visualizer. Blocking until window closes.");
visualizer.Run();
For more information, check out the Visualization Tutorial, where we cover point cloud, color image, depth map, and normals visualization, with implementations using third-party libraries.