Point Cloud Tutorial

Introduction

This tutorial describes how to use the Zivid SDK to work with point cloud data.

Tip

If you prefer watching a video, our webinar Getting your point cloud ready for your application covers the Point Cloud Tutorial.

Prerequisites

Frame

The frame (Zivid::Frame in C++, Zivid.NET.Frame in C#, zivid.Frame in Python) contains the point cloud and color image (stored on compute device memory) as well as the capture and camera information.

Capture

When you capture with Zivid, you get a frame in return.

C++

const auto frame = camera.capture2D3D(settings);
C#

using (var frame = camera.Capture2D3D(settings))
Python

with camera.capture_2d_3d(settings) as frame:

See the Capture Tutorial for detailed instructions on how to capture.
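
For context, a minimal end-to-end capture sketch in Python is shown below. It assumes a recent SDK version where capture_2d_3d and the color settings member are available, and the default acquisition settings used here are placeholders; use settings tuned for your scene in practice.

import zivid

app = zivid.Application()
camera = app.connect_camera()

# Placeholder settings: one default 3D acquisition plus one default 2D
# acquisition for color. Tune these for your scene (see the Capture Tutorial).
settings = zivid.Settings(
    acquisitions=[zivid.Settings.Acquisition()],
    color=zivid.Settings2D(acquisitions=[zivid.Settings2D.Acquisition()]),
)

frame = camera.capture_2d_3d(settings)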

Load

The frame can also be loaded from a ZDF file.

C++

const auto dataFile = std::string(ZIVID_SAMPLE_DATA_DIR) + "/Zivid3D.zdf";
std::cout << "Reading ZDF frame from file: " << dataFile << std::endl;
const auto frame = Zivid::Frame(dataFile);
C#

var dataFile =
    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) + "/Zivid/Zivid3D.zdf";
Console.WriteLine("Reading ZDF frame from file: " + dataFile);
var frame = new Zivid.NET.Frame(dataFile);
Python

data_file = get_sample_data_path() / "Zivid3D.zdf"
print(f"Reading point cloud from file: {data_file}")
frame = zivid.Frame(data_file)
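
Conversely, a frame captured with the camera can be written to a ZDF file for later use. A minimal Python sketch (the file name here is arbitrary):

data_file = "Frame.zdf"
print(f"Saving frame to file: {data_file}")
frame.save(data_file)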

Point Cloud

Get handle from Frame

You can now get a handle to the point cloud data on the GPU.

C++

const auto pointCloud = frame.pointCloud();
C#

var pointCloud = frame.PointCloud;
Python

point_cloud = frame.point_cloud()

The point cloud contains XYZ, RGB, and SNR values, laid out on a 2D grid.

For more information, check out Point Cloud Structure.
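
As a quick illustration of the 2D-grid layout, the point cloud exposes its resolution directly, and data copied from it (covered below) keeps the same height x width layout. A small Python sketch, assuming point_cloud was obtained as above:

print(f"Point cloud resolution: {point_cloud.width} x {point_cloud.height}")

xyz = point_cloud.copy_data("xyz")  # numpy array of shape (height, width, 3)
center = xyz[point_cloud.height // 2, point_cloud.width // 2]
print(f"XYZ at the center pixel (NaN if no valid 3D data there): {center}")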

The Zivid::Frame::pointCloud() function in C++, the Zivid.NET.Frame.PointCloud property in C#, and the zivid.Frame.point_cloud() method in Python do not perform any copying from GPU memory.

Note

The capture call returns at some point in time after the camera completes capturing the raw images, and the point cloud handle is available immediately. However, the actual point cloud data becomes available only once processing on the GPU has finished. Any call to a data-copy function (see the section below) blocks and waits for processing to finish before proceeding with the requested copy operation.

For a detailed explanation, see Point Cloud Capture Process.

Copy from GPU to CPU memory

You can now selectively copy data based on what your application requires. The tables below list the available output data formats and how to copy them from the GPU.

C++

| Return type | Copy functions | Data per pixel | Total data |
|---|---|---|---|
| Zivid::Array2D<Zivid::PointXYZ> | PointCloud::copyPointsXYZ() or PointCloud::copyData<Zivid::PointXYZ>() | 12 bytes | 28 MB |
| Zivid::Array2D<Zivid::PointXYZW> | PointCloud::copyPointsXYZW() or PointCloud::copyData<Zivid::PointXYZW>() | 16 bytes | 37 MB |
| Zivid::Array2D<Zivid::PointZ> | PointCloud::copyPointsZ() or PointCloud::copyData<Zivid::PointZ>() | 4 bytes | 9 MB |
| Zivid::Array2D<Zivid::ColorRGBA> | PointCloud::copyColorsRGBA() or PointCloud::copyData<Zivid::ColorRGBA>() | 4 bytes | 9 MB |
| Zivid::Array2D<Zivid::SNR> | PointCloud::copySNRs() or PointCloud::copyData<Zivid::SNR>() | 4 bytes | 9 MB |
| Zivid::Array2D<Zivid::PointXYZColorRGBA> | PointCloud::copyData<Zivid::PointXYZColorRGBA>() | 16 bytes | 37 MB |
| Zivid::Array2D<Zivid::PointXYZColorBGRA> | PointCloud::copyPointsXYZColorsBGRA() or PointCloud::copyData<Zivid::PointXYZColorBGRA>() | 16 bytes | 37 MB |
| Zivid::Image<Zivid::ColorRGBA> | PointCloud::copyImageRGBA() | 4 bytes | 9 MB |
| Zivid::Image<Zivid::ColorBGRA> | PointCloud::copyImageBGRA() | 4 bytes | 9 MB |
| Zivid::Image<Zivid::ColorSRGB> | PointCloud::copyImageSRGB() | 4 bytes | 9 MB |

C#

| Return type | Copy methods | Data per pixel | Total data |
|---|---|---|---|
| float[height,width,3] | PointCloud.CopyPointsXYZ() | 12 bytes | 28 MB |
| float[height,width,4] | PointCloud.CopyPointsXYZW() | 16 bytes | 37 MB |
| float[height,width,1] | PointCloud.CopyPointsZ() | 4 bytes | 9 MB |
| byte[height,width,4] | PointCloud.CopyColorsRGBA() | 4 bytes | 9 MB |
| float[height,width] | PointCloud.CopySNRs() | 4 bytes | 9 MB |
| Zivid.NET.PointXYZColorRGBA[height, width] | PointCloud.CopyPointsXYZColorsRGBA() | 16 bytes | 37 MB |
| Zivid.NET.PointXYZColorBGRA[height, width] | PointCloud.CopyPointsXYZColorsBGRA() | 16 bytes | 37 MB |
| Zivid.NET.ImageRGBA | PointCloud.CopyImageRGBA() | 4 bytes | 9 MB |
| Zivid.NET.ImageBGRA | PointCloud.CopyImageBGRA() | 4 bytes | 9 MB |
| Zivid.NET.ImageSRGB | PointCloud.CopyImageSRGB() | 4 bytes | 9 MB |

Python

| Return type | Copy functions | Data per pixel | Total data |
|---|---|---|---|
| numpy.ndarray([height,width,3], dtype=float32) | PointCloud.copy_data("xyz") | 12 bytes | 28 MB |
| numpy.ndarray([height,width,4], dtype=float32) | PointCloud.copy_data("xyzw") | 16 bytes | 37 MB |
| numpy.ndarray([height,width], dtype=float32) | PointCloud.copy_data("z") | 4 bytes | 9 MB |
| numpy.ndarray([height,width,4], dtype=uint8) | PointCloud.copy_data("rgba") | 4 bytes | 9 MB |
| numpy.ndarray([height,width,4], dtype=uint8) | PointCloud.copy_data("bgra") | 4 bytes | 9 MB |
| numpy.ndarray([height,width,4], dtype=uint8) | PointCloud.copy_data("srgb") | 4 bytes | 9 MB |
| numpy.ndarray([height,width], dtype=float32) | PointCloud.copy_data("snr") | 4 bytes | 9 MB |
| numpy.ndarray([height,width], dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4'), ('r', 'u1'), ('g', 'u1'), ('b', 'u1'), ('a', 'u1')]) | PointCloud.copy_data("xyzrgba") | 16 bytes | 37 MB |

Here is an example of how to copy data.

C++

const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
C#

var pointCloudData = pointCloud.CopyPointsXYZColorsRGBA();
Python

xyz = point_cloud.copy_data("xyz")
rgba = point_cloud.copy_data("rgba")
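
Invalid points are represented as NaN, so a common next step is to mask them out, for example to obtain flat arrays of valid points and their colors. A small Python sketch, reusing the xyz and rgba arrays copied above:

import numpy as np

valid = ~np.isnan(xyz[:, :, 2])   # pixels with valid 3D data
points = xyz[valid]               # (N, 3) array of valid XYZ points
colors = rgba[valid][:, :3]       # (N, 3) array of matching RGB values
print(f"Valid points: {points.shape[0]} of {valid.size}")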

Memory allocation options

In terms of memory allocation, there are two ways to copy data:

  • The Zivid SDK can allocate a memory buffer and copy data to it.

  • The user can pass a pointer to a pre-allocated memory buffer, and the Zivid SDK will copy the data into it.

We present examples for the two memory allocation options using OpenCV.

Copy selected data from GPU to CPU memory (Zivid-allocated)

If you only need, for example, the color data of the point cloud, you can copy just that data to CPU memory.

C++

std::cout << "Capturing frame" << std::endl;
frame = camera.capture(settings);
pointCloud = frame.pointCloud();

std::cout << "Copying colors with Zivid API from GPU to CPU" << std::endl;
auto colors = pointCloud.copyColorsBGRA();

std::cout << "Casting the data pointer as a void*, since this is what the OpenCV matrix constructor requires."
          << std::endl;
auto *dataPtrZividAllocated = const_cast<void *>(static_cast<const void *>(colors.data()));

std::cout << "Wrapping this block of data in an OpenCV matrix. This is possible since the layout of \n"
          << "Zivid::ColorBGRA exactly matches the layout of CV_8UC4. No copying occurs in this step."
          << std::endl;
const cv::Mat bgraZividAllocated(colors.height(), colors.width(), CV_8UC4, dataPtrZividAllocated);

std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image Zivid Allocated", bgraZividAllocated);
cv::waitKey(0);

Copy selected data from GPU to CPU memory (user-allocated)

In the above example, ownership of the data was held by the returned Zivid::Array2D<> objects. Alternatively, you may provide a pre-allocated memory buffer to Zivid::PointCloud::copyData(dataPtr). The type of dataPtr defines what shall be copied (PointXYZ, ColorRGBA, etc.).

Now let us look at the same use case as above, but this time OpenCV allocates the necessary storage, and the Zivid API copies the data from the GPU directly into that memory location.

C++

std::cout << "Allocating the necessary storage with OpenCV API based on resolution info before any capturing"
          << std::endl;
auto bgraUserAllocated = cv::Mat(resolution.height(), resolution.width(), CV_8UC4);

std::cout << "Capturing frame" << std::endl;
auto frame = camera.capture(settings);
auto pointCloud = frame.pointCloud();

std::cout << "Copying data with Zivid API from the GPU into the memory location allocated by OpenCV"
          << std::endl;
pointCloud.copyData(&(*bgraUserAllocated.begin<Zivid::ColorBGRA>()));

std::cout << "Displaying image" << std::endl;
cv::imshow("BGRA image User Allocated", bgraUserAllocated);
cv::waitKey(0);

Transform

You may want to transform the point cloud, for example to change its origin from the camera frame to the robot base frame, or to scale it by converting from millimeters to meters.

C++

pointCloud.transform(baseToCameraTransform);
C#

pointCloud.Transform(transformBaseToCamera);
Python

point_cloud.transform(base_to_camera_transform)
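
The transform is given as a 4x4 homogeneous transformation matrix; in the samples, base_to_camera_transform typically comes from, e.g., hand-eye calibration. As an illustration of the millimeter-to-meter scaling mentioned above, a small Python sketch with a hand-written matrix:

import numpy as np

# Homogeneous transformation that scales XYZ from millimeters to meters.
mm_to_m = np.array(
    [
        [0.001, 0.0, 0.0, 0.0],
        [0.0, 0.001, 0.0, 0.0],
        [0.0, 0.0, 0.001, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]
)
point_cloud.transform(mm_to_m)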

Downsample

Sometimes you might not need a point cloud with as high spatial resolution as the camera provides. In that case, you can downsample the point cloud.

Note

Sampling (3D) describes a hardware-based subsampling method that reduces the resolution of the point cloud during capture, which also reduces acquisition and capture time.

Downsampling can be done in-place, which modifies the current point cloud.

C++

pointCloud.downsample(Zivid::PointCloud::Downsampling::by2x2);
C#

pointCloud.Downsample(Zivid.NET.PointCloud.Downsampling.By2x2);
Python

point_cloud.downsample(zivid.PointCloud.Downsampling.by2x2)

It is also possible to get the downsampled point cloud as a new point cloud instance, which does not alter the existing point cloud.

C++

auto downsampledPointCloud = pointCloud.downsampled(Zivid::PointCloud::Downsampling::by2x2);
C#

var downsampledPointCloud = pointCloud.Downsampled(Zivid.NET.PointCloud.Downsampling.By2x2);
Python

downsampled_point_cloud = point_cloud.downsampled(zivid.PointCloud.Downsampling.by2x2)

Zivid SDK supports the following downsampling rates: by2x2, by3x3, and by4x4, with the possibility to perform downsampling multiple times.
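
To see the effect, you can compare the resolution before and after downsampling; each by2x2 pass roughly halves the width and height, and passes can be chained for even lower resolution. A small Python sketch:

print(f"Original resolution: {point_cloud.width} x {point_cloud.height}")
downsampled = point_cloud.downsampled(zivid.PointCloud.Downsampling.by2x2)
print(f"After one 2x2 pass: {downsampled.width} x {downsampled.height}")

# Downsampling can be performed multiple times for even lower resolution.
downsampled = downsampled.downsampled(zivid.PointCloud.Downsampling.by2x2)
print(f"After two 2x2 passes: {downsampled.width} x {downsampled.height}")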

Normals

Some applications require computing normals from the point cloud.

C++

std::cout << "Computing normals and copying them to CPU memory" << std::endl;
const auto normals = pointCloud.copyData<Zivid::NormalXYZ>();
C#

Console.WriteLine("Computing normals and copying them to CPU memory");
var normals = pointCloud.CopyNormalsXYZ();
Python

print("Computing normals and copying them to CPU memory")
normals = point_cloud.copy_data("normals")

The Normals API computes the normal at each point in the point cloud and copies normals from the GPU memory to the CPU memory. The result is a matrix of normal vectors, one for each point in the input point cloud. The size of normals is equal to the size of the input point cloud.
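
The normals are laid out on the same (height, width) grid as the point cloud, stored as float32 with NaN where no normal could be computed. A small Python sketch for masking out the undefined normals, reusing the normals array copied above:

import numpy as np

valid = ~np.isnan(normals[:, :, 0])      # pixels where a normal was computed
lengths = np.linalg.norm(normals[valid], axis=1)
print(f"Valid normals: {valid.sum()} of {valid.size}, mean length: {lengths.mean():.3f}")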

Visualize

Having the frame allows you to visualize the point cloud.

C++

std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;

std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(frame);
visualizer.resetToFit();

std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
C#

Console.WriteLine("Setting up visualization");
using (var visualizer = new Zivid.NET.Visualization.Visualizer())
{
    Console.WriteLine("Visualizing point cloud");
    visualizer.Show(frame);
    visualizer.ShowMaximized();
    visualizer.ResetToFit();

    Console.WriteLine("Running visualizer. Blocking until window closes.");
    visualizer.Run();
}

You can visualize the point cloud from the point cloud object as well.

C++

std::cout << "Getting point cloud from frame" << std::endl;
auto pointCloud = frame.pointCloud();

std::cout << "Setting up visualization" << std::endl;
Zivid::Visualization::Visualizer visualizer;

std::cout << "Visualizing point cloud" << std::endl;
visualizer.showMaximized();
visualizer.show(pointCloud);
visualizer.resetToFit();

std::cout << "Running visualizer. Blocking until window closes." << std::endl;
visualizer.run();
C#

Console.WriteLine("Getting point cloud from frame");
var pointCloud = frame.PointCloud;

Console.WriteLine("Setting up visualization");
var visualizer = new Zivid.NET.Visualization.Visualizer();

Console.WriteLine("Visualizing point cloud");
visualizer.Show(pointCloud);
visualizer.ShowMaximized();
visualizer.ResetToFit();

Console.WriteLine("Running visualizer. Blocking until window closes.");
visualizer.Run();
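
The Python examples in this tutorial do not use a built-in visualizer; a common approach is to visualize the copied data with a third-party library such as Open3D. A minimal sketch, assuming the open3d package is installed and that the xyz and rgba arrays were copied as shown earlier:

import numpy as np
import open3d as o3d

valid = ~np.isnan(xyz[:, :, 2])
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz[valid].astype(np.float64))
pcd.colors = o3d.utility.Vector3dVector(rgba[valid][:, :3].astype(np.float64) / 255.0)
o3d.visualization.draw_geometries([pcd])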

For more information, check out the Visualization Tutorial, where we cover visualization of the point cloud, color image, depth map, and normals, with implementations using third-party libraries.

Conclusion

This tutorial shows how to use the Zivid SDK to extract the point cloud, manipulate it, transform it, and visualize it.

Version History

| SDK | Changes |
|---|---|
| 2.11.0 | Added support for SRGB color space. |
| 2.10.0 | Monochrome Capture introduces a faster alternative to Downsample. |