Stitch by Transform Tutorial
This tutorial describes how to use the results of calibrating multiple cameras to each other. Multi-camera calibration produces transformation matrices that can be used to transform one point cloud into the coordinate frame of another. Applying these matrices is the natural first step in stitching point clouds together.
Prerequisites
You should have completed the Multi-Camera Calibration Tutorial before going through this tutorial.
Load associated transformation matrices and map to point cloud
To apply the correct transformation matrix, we must map it to its corresponding frame. When capturing directly from a camera, we use the camera's serial number; when loading from file, we use the serial number stored in the ZDF frame. In both cases, we expect to find a YAML file whose name exactly matches that serial number.
The mapping works in much the same way whether we capture directly from the camera or load from a file. When capturing with a camera, we read its serial number, search for a YAML file with that name, and map the transformation matrix to the camera. When loading from ZDF files instead, we take each ZDF file from the list of files, extract the serial number from its frame, search the same file list for a YAML file named after that serial number, and map the transformation matrix to the frame.
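As a minimal sketch of the live-capture case (the "<serialNumber>.yaml" file location is an assumption of this sketch; connectedCameras and transformsMappedToCameras match the stitching loop further down):

std::map<std::string, Zivid::Matrix4x4> transformsMappedToCameras;
for(auto &camera : connectedCameras)
{
    // The YAML file produced by multi-camera calibration is assumed to be named after the serial number
    const auto serialNumber = camera.info().serialNumber().toString();
    transformsMappedToCameras.emplace(serialNumber, Zivid::Matrix4x4(serialNumber + ".yaml"));
}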
In getTransformedPointClouds we apply the transformation and extend the unorganized point cloud right away. A condensed version of the helper (the parameter list shown here is reconstructed for illustration):
Zivid::UnorganizedPointCloud getTransformedPointClouds(const Zivid::Frame &frame, const std::string &yamlFileName)
{
    Zivid::UnorganizedPointCloud stitchedPointCloud;
    // Load the transformation matrix mapped to this frame and apply it right away
    Zivid::Matrix4x4 transformationMatrixZivid(yamlFileName);
    Zivid::UnorganizedPointCloud currentPointCloud = frame.pointCloud().toUnorganizedPointCloud();
    stitchedPointCloud.extend(currentPointCloud.transform(transformationMatrixZivid));
    return stitchedPointCloud;
}
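A hypothetical usage of the helper when loading from ZDF files, following the lookup described above (zdfFileList and the "<serialNumber>.yaml" naming are assumptions of this sketch):

Zivid::UnorganizedPointCloud stitchedPointCloud;
for(const auto &zdfFileName : zdfFileList)
{
    // The serial number stored in the frame selects the matching YAML transform file
    const Zivid::Frame frame(zdfFileName);
    const auto serialNumber = frame.cameraInfo().serialNumber().toString();
    stitchedPointCloud.extend(getTransformedPointClouds(frame, serialNumber + ".yaml"));
}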
Apply transformation matrix and stitch transformed point cloud with previous
We have now mapped each capture to a transformation matrix, so we are ready to transform and stitch. We capture from each camera and look up the corresponding transform in transformsMappedToCameras. For each capture, we transform and extend the unorganized point cloud:
for(auto &camera : connectedCameras)
{
    const auto settingsPath = std::string(ZIVID_SAMPLE_DATA_DIR) + "/Settings/" + sanitizedModelName(camera)
        + "_ManufacturingSpecular.yml";
    std::cout << "Imaging from camera: " << camera.info().serialNumber() << std::endl;
    const auto frame = camera.capture2D3D(Zivid::Settings(settingsPath));
    const auto unorganizedPointCloud = frame.pointCloud().toUnorganizedPointCloud();
    const auto transformationMatrix = transformsMappedToCameras.at(camera.info().serialNumber().toString());
    const auto transformedUnorganizedPointCloud = unorganizedPointCloud.transformed(transformationMatrix);
    stitchedPointCloud.extend(transformedUnorganizedPointCloud);
}
for camera in connected_cameras:
    settings_path = (
        get_sample_data_path()
        / "Settings"
        / f"{camera.info.model_name.replace('2+', 'Two_Plus').replace('2', 'Two').replace('3', 'Three').replace(' ', '_')}_ManufacturingSpecular.yml"
    )
    print(f"Imaging from camera: {camera.info.serial_number}")
    frame = camera.capture(zivid.Settings.load(settings_path))
    unorganized_point_cloud = frame.point_cloud().to_unorganized_point_cloud()
    transformation_matrix = transforms_mapped_to_cameras[camera.info.serial_number]
    transformed_unorganized_point_cloud = unorganized_point_cloud.transformed(transformation_matrix)
    stitched_point_cloud.extend(transformed_unorganized_point_cloud)
Voxel downsample
Before visualization we clean up the stitched point cloud:
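A minimal sketch, assuming the voxelDownsampled(voxelSize, minSourcePointsPerVoxel) method on Zivid::UnorganizedPointCloud (the parameter names are an assumption of this sketch):

// Downsample the stitched cloud: 0.5 mm voxels, keeping voxels with at least 1 source point
const auto finalPointCloud = stitchedPointCloud.voxelDownsampled(0.5F, 1);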
Here we ask for a voxel size of 0.5 mm. The second parameter is useful when more than two cameras are involved: it can then be used to require a majority vote within each voxel. For more information, see Voxel downsample.
Visualize
We use the Zivid::Visualization::Visualizer to visualize the stitched point cloud.
void visualizePointCloud(const Zivid::UnorganizedPointCloud &unorganizedPointCloud)
{
    Zivid::Visualization::Visualizer visualizer;
    visualizer.showMaximized();
    visualizer.show(unorganizedPointCloud);
    visualizer.resetToFit();
    std::cout << "Running visualizer. Blocking until window closes." << std::endl;
    visualizer.run();
}
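For example, applied to the downsampled cloud from the previous step:

visualizePointCloud(finalPointCloud);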
Save stitched point cloud
Finally, we can optionally save the stitched point cloud:
const std::string &fileName = stitchedPointCloudFileName;
std::cout << "Saving " << finalPointCloud.size() << " data points to " << fileName << std::endl;
using PLY = Zivid::Experimental::PointCloudExport::FileFormat::PLY;
const auto colorSpace = Zivid::Experimental::PointCloudExport::ColorSpace::sRGB;
Zivid::Experimental::PointCloudExport::exportUnorganizedPointCloud(
    finalPointCloud, PLY{ fileName, PLY::Layout::unordered, colorSpace });