Stitch by Transform Tutorial

This tutorial describes how to use the results of calibrating multiple cameras to each other. The result of multi-camera calibration is a set of transformation matrices that can be used to transform the point cloud of one camera into the coordinate frame of another. This is a good first step toward stitching point clouds together.

Prerequisites

You should have completed the Multi-Camera Calibration Tutorial before going through this tutorial.

Load associated transformation matrices and map to point cloud

To apply the correct transformation matrix, we must map it to its corresponding frame. When capturing directly from a camera, we use the camera's serial number; when loading from file, we use the serial number stored in the ZDF frame. In both cases, we expect the serial number to match the YAML file name exactly.

Mapping a transformation matrix to a point cloud works almost the same way whether we capture directly from a camera or load from a file. When capturing with a camera, we read its serial number and search for a YAML file with that name, and then map the transformation matrix to that camera.

C++

const auto transformsMappedToCameras =
    getTransformationMatricesFromYAML(transformationMatricesfileList, connectedCameras);
Python

transforms_mapped_to_cameras = get_transformation_matrices_from_yaml(args.yaml_files, connected_cameras)
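
The mapping helper itself is not shown in full here. Below is a minimal sketch of how it could look in Python, assuming each YAML file is named after the serial number of the camera it was calibrated for; the function name and dictionary layout follow the call above, but the actual sample implementation may differ:

from pathlib import Path

import zivid


def get_transformation_matrices_from_yaml(yaml_files, cameras):
    """Map each connected camera's serial number to the transformation matrix
    stored in the YAML file whose name matches that serial number."""
    transforms = {}
    for camera in cameras:
        serial_number = str(camera.info.serial_number)
        matching_files = [f for f in yaml_files if Path(f).stem == serial_number]
        if not matching_files:
            raise RuntimeError(f"No transformation YAML found for camera {serial_number}")
        transforms[serial_number] = zivid.Matrix4x4(matching_files[0])
    return transforms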

When loading from ZDF files, on the other hand, we take each ZDF file from the list of files and extract its serial number from the frame. We then search the same file list for a YAML file that uses that serial number as its name, and map the transformation matrix to that frame. In getTransformedPointClouds we apply the transformation and extend the unorganized point cloud right away.

C++

Zivid::UnorganizedPointCloud getTransformedPointClouds(
    Zivid::UnorganizedPointCloud stitchedPointCloud;
                Zivid::Matrix4x4 transformationMatrixZivid(yamlFileName);
                Zivid::UnorganizedPointCloud currentPointCloud = frame.pointCloud().toUnorganizedPointCloud();
                stitchedPointCloud.extend(currentPointCloud.transform(transformationMatrixZivid));
Python

def get_transformed_point_clouds(
    stitched_point_cloud = zivid.UnorganizedPointCloud()
        transformation_matrix = zivid.Matrix4x4(yaml_file)
        current_point_cloud = frame.point_cloud().to_unorganized_point_cloud()
        stitched_point_cloud.extend(current_point_cloud.transform(transformation_matrix))
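
For context, here is a self-contained sketch of the same function, assuming it receives the lists of ZDF and YAML files; the real sample's signature, argument handling, and error reporting may differ:

from pathlib import Path

import zivid


def get_transformed_point_clouds(zdf_files, yaml_files):
    """Load each ZDF frame, look up the YAML file named after the frame's camera
    serial number, transform the point cloud, and stitch it into the result."""
    stitched_point_cloud = zivid.UnorganizedPointCloud()
    for zdf_file in zdf_files:
        frame = zivid.Frame(zdf_file)
        serial_number = str(frame.camera_info.serial_number)
        matching_yaml = [f for f in yaml_files if Path(f).stem == serial_number]
        if not matching_yaml:
            raise RuntimeError(f"No transformation YAML found for serial number {serial_number}")
        transformation_matrix = zivid.Matrix4x4(matching_yaml[0])
        current_point_cloud = frame.point_cloud().to_unorganized_point_cloud()
        stitched_point_cloud.extend(current_point_cloud.transform(transformation_matrix))
    return stitched_point_cloud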

Apply transformation matrix and stitch transformed point cloud with previous

We have now mapped each capture to a transformation matrix, so we are ready to transform and stitch.

We capture with each camera and look up the corresponding transform in transformsMappedToCameras. For each capture we transform the unorganized point cloud and extend the stitched point cloud with it.

C++

for(auto &camera : connectedCameras)
{
    const auto settingsPath = std::string(ZIVID_SAMPLE_DATA_DIR) + "/Settings/" + sanitizedModelName(camera)
                              + "_ManufacturingSpecular.yml";
    std::cout << "Imaging from camera: " << camera.info().serialNumber() << std::endl;
    const auto frame = camera.capture2D3D(Zivid::Settings(settingsPath));
    const auto unorganizedPointCloud = frame.pointCloud().toUnorganizedPointCloud();
    const auto transformationMatrix = transformsMappedToCameras.at(camera.info().serialNumber().toString());
    const auto transformedUnorganizedPointCloud = unorganizedPointCloud.transformed(transformationMatrix);
    stitchedPointCloud.extend(transformedUnorganizedPointCloud);
}
Python

for camera in connected_cameras:
    settings_path = (
        get_sample_data_path()
        / "Settings"
        / f"{camera.info.model_name.replace('2+', 'Two_Plus').replace('2', 'Two').replace(' ', '_')}_ManufacturingSpecular.yml"
    )
    print(f"Imaging from camera: {camera.info.serial_number}")
    frame = camera.capture(zivid.Settings.load(settings_path))
    unorganized_point_cloud = frame.point_cloud().to_unorganized_point_cloud()
    transformation_matrix = transforms_mapped_to_cameras[camera.info.serial_number]
    transformed_unorganized_point_cloud = unorganized_point_cloud.transformed(transformation_matrix)
    stitched_point_cloud.extend(transformed_unorganized_point_cloud)

Voxel downsample

Before visualization we clean up the stitched point cloud:

C++

const auto finalPointCloud = stitchedPointCloud.voxelDownsampled(0.5, 1);
Python

final_point_cloud = stitched_point_cloud.voxel_downsampled(0.5, 1)

Here we ask for a voxel size of 0.5 mm. The second parameter is useful if more than two cameras are involved; it can then be used to require a majority vote within each voxel. For more information, see Voxel downsample.
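
For example, with three overlapping cameras one could raise the second argument so that a voxel only survives if at least two cameras contributed points to it, which acts as a majority vote against stray points (this assumes the second argument is the minimum number of source points required per voxel):

# Keep only voxels that received points from at least two of the three cameras.
final_point_cloud = stitched_point_cloud.voxel_downsampled(0.5, 2)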

Visualize

We use Open3D to visualize the stitched point cloud, and we set the viewpoint so that it is placed inside the primary camera; a usage sketch follows the copy helpers below.

Copy data

We can copy the points directly from the GPU into the Open3D tensor. Colors are slightly more involved, as Zivid stores them as 8-bit channels whereas Open3D uses three floats per point. So we first copy the colors to the CPU and then assign them to the Open3D tensor while normalizing to the [0, 1] range.

C++

open3d::t::geometry::PointCloud copyToOpen3D(const Zivid::UnorganizedPointCloud &pointCloud)
{
    auto device = open3d::core::Device("CPU:0");
    auto xyzTensor =
        open3d::core::Tensor({ static_cast<int64_t>(pointCloud.size()), 3 }, open3d::core::Dtype::Float32, device);
    auto rgbTensor =
        open3d::core::Tensor({ static_cast<int64_t>(pointCloud.size()), 3 }, open3d::core::Dtype::Float32, device);

    pointCloud.copyData(reinterpret_cast<Zivid::PointXYZ *>(xyzTensor.GetDataPtr<float>()));

    // Open3D does not store colors in 8-bit
    auto *rgbPtr = rgbTensor.GetDataPtr<float>();
    auto rgbaColors = pointCloud.copyColorsRGBA_SRGB();
    for(size_t i = 0; i < pointCloud.size(); ++i)
    {
        rgbPtr[3 * i] = static_cast<float>(rgbaColors(i).r) / 255.0f;
        rgbPtr[3 * i + 1] = static_cast<float>(rgbaColors(i).g) / 255.0f;
        rgbPtr[3 * i + 2] = static_cast<float>(rgbaColors(i).b) / 255.0f;
    }

    open3d::t::geometry::PointCloud cloud(device);
    cloud.SetPointPositions(xyzTensor);
    cloud.SetPointColors(rgbTensor);
    return cloud;
}

Python

def copy_to_open3d_point_cloud(
    xyz: np.ndarray, rgb: np.ndarray, normals: Optional[np.ndarray] = None
) -> o3d.geometry.PointCloud:
    """Copy point cloud data to Open3D PointCloud object.

    Args:
        xyz: A numpy array of X, Y and Z point cloud coordinates
        rgb: RGB image
        normals: Ordered array of normal vectors, mapped to xyz

    Returns:
        An Open3D PointCloud
    """
    xyz = np.nan_to_num(xyz).reshape(-1, 3)
    if normals is not None:
        normals = np.nan_to_num(normals).reshape(-1, 3)
    rgb = rgb.reshape(-1, rgb.shape[-1])[:, :3]

    open3d_point_cloud = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(xyz))
    open3d_point_cloud.colors = o3d.utility.Vector3dVector(rgb / 255)
    if normals is not None:
        open3d_point_cloud.normals = o3d.utility.Vector3dVector(normals)

    return open3d_point_cloud
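
A minimal usage sketch of the Python helper above, assuming the unorganized point cloud exposes copy_data with the "xyz" and "rgba_srgb" formats (as the organized point cloud does) and that the primary camera sits at the origin of the stitched coordinate frame:

import open3d as o3d

# Copy data from the stitched point cloud to numpy arrays (assumed API).
xyz = final_point_cloud.copy_data("xyz")
rgba = final_point_cloud.copy_data("rgba_srgb")

open3d_point_cloud = copy_to_open3d_point_cloud(xyz, rgba)

# Place the viewpoint at the origin of the primary camera, looking along +Z
# (Zivid cameras have +Z pointing into the scene and +Y pointing down).
o3d.visualization.draw_geometries(
    [open3d_point_cloud],
    zoom=0.3,
    front=[0.0, 0.0, -1.0],
    lookat=[0.0, 0.0, 500.0],
    up=[0.0, -1.0, 0.0],
)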

Save stitched point cloud

Finally we can optionally save the point cloud:

C++

const std::string &fileName = stitchedPointCloudFileName;
std::cout << "Saving " << finalPointCloud.size() << " data points to " << fileName << std::endl;

using PLY = Zivid::Experimental::PointCloudExport::FileFormat::PLY;
const auto colorSpace = Zivid::Experimental::PointCloudExport::ColorSpace::sRGB;
Zivid::Experimental::PointCloudExport::exportUnorganizedPointCloud(
    finalPointCloud, PLY{ fileName, PLY::Layout::unordered, colorSpace });
Python

print(f"Saving {len(final_point_cloud.size())} data points to {args.output_file}")
export_unorganized_point_cloud(
    final_point_cloud, PLY(args.output_file, layout=PLY.Layout.unordered, color_space=ColorSpace.srgb)
)
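
As an optional sanity check, the exported file can be loaded back with Open3D to confirm the point count; this snippet assumes the same output path as above:

import open3d as o3d

# Load the exported PLY file and print a summary of the stitched point cloud.
stitched = o3d.io.read_point_cloud(str(args.output_file))
print(stitched)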