Multi-Camera Calibration Tutorial

This tutorial describes how to use the Zivid SDK to calibrate multiple cameras against each other. The result of this calibration is a transformation matrix from each camera to a primary camera. By default, the primary camera is the first camera connected to, but it can be selected explicitly via its serial number.

Prerequisites

You should have the Zivid SDK installed, along with the C++ or Python samples. For more details, see zivid-cpp-samples or zivid-python-samples. This tutorial does not cover SDK basics such as initialization and capture; please see the Capture Tutorial for those.

Connect to cameras

In this tutorial, we will connect to all available cameras. We can do so via

C++

auto cameras = zivid.cameras();
std::cout << "Number of cameras found: " << cameras.size() << std::endl;

auto connectedCameras = connectToAllAvailableCameras(cameras);
if(connectedCameras.size() < 2)
{
    throw std::runtime_error("At least two cameras need to be connected");
}
std::cout << "Number of connected cameras: " << connectedCameras.size() << std::endl;
Python

cameras = app.cameras()
print(f"Number of cameras found: {len(cameras)}")

connected_cameras = connect_to_all_available_cameras(cameras)
if len(connected_cameras) < 2:
    raise RuntimeError("At least two cameras need to be connected")
print(f"Number of connected cameras: {len(connected_cameras)}")
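The helper connect_to_all_available_cameras is not shown above. A minimal sketch of what it might look like, assuming each camera object exposes connect() (which raises on failure) and info.serial_number, as in the Zivid Python API:

```python
def connect_to_all_available_cameras(cameras):
    """Try to connect to every camera; return only those that succeed."""
    connected_cameras = []
    for camera in cameras:
        try:
            camera.connect()
            connected_cameras.append(camera)
        except RuntimeError as ex:
            # A camera may be busy, unavailable, or in use by another process
            print(f"Could not connect to camera {camera.info.serial_number}: {ex}")
    return connected_cameras
```

Skipping unreachable cameras rather than aborting lets the sample proceed with whichever subset is available; the count check above then enforces the minimum of two.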

Capture calibration object

We are now ready to capture the calibration object.

In the sample, capture is performed with the dedicated calibration board capture function.

C++

const auto frame = Zivid::Calibration::captureCalibrationBoard(camera);
Python

frame = zivid.calibration.capture_calibration_board(camera)

When loading the point cloud from a file instead, we simply replace this line of code with:

C++

const auto frame = Zivid::Frame(fileName);
Python

frame = zivid.Frame(file_name)

Detect checkerboard feature points

The calibration object we use in this tutorial is a checkerboard. Before we can run calibration, we must detect the checkerboard's feature points in the captures from all cameras.

C++

const auto detectionResult = Zivid::Calibration::detectCalibrationBoard(frame);
Python

detection_result = zivid.calibration.detect_calibration_board(frame)

At this point, we can verify that the capture quality was good enough. detectionResult overloads the bool operator, so it can be tested directly. When it passes the quality check, we save the detection result together with the serial number of the camera used.

C++

if(detectionResult)
{
    Detection currentDetection(serial, detectionResult);
    detectionsList.push_back(currentDetection);
}
else
{
    throw std::runtime_error(
        "Could not detect checkerboard. Please ensure it is visible from all cameras.");
}
Python

if detection_result:
    detections_list.append(Detection(serial, detection_result))
else:
    raise RuntimeError("Could not detect checkerboard. Please ensure it is visible from all cameras.")
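The capture and detection snippets above run once per camera. A hedged sketch of how the per-camera loop might tie them together; the capture and detect callables are passed in here so the structure stands alone, but in the sample they would be zivid.calibration.capture_calibration_board and zivid.calibration.detect_calibration_board, with the serial read from camera.info.serial_number:

```python
def detect_from_all_cameras(connected_cameras, capture, detect):
    """Capture the board with each camera and collect (camera, detection) pairs.

    Raises if the checkerboard is not detected by any one camera, since
    multi-camera calibration needs a valid detection from every camera.
    """
    detections = []
    for camera in connected_cameras:
        frame = capture(camera)
        detection_result = detect(frame)
        if not detection_result:
            raise RuntimeError("Could not detect checkerboard. Please ensure it is visible from all cameras.")
        detections.append((camera, detection_result))
    return detections
```

Failing fast on the first missed detection mirrors the sample's behavior: a single camera without a valid detection makes the whole calibration impossible.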

Perform Multi-Camera Calibration

Now that we have detected all feature points in all captures from all cameras, we can perform the multi-camera calibration.

C++

const auto results = Zivid::Calibration::calibrateMultiCamera(detectionResultsList);
Python

results = zivid.calibration.calibrate_multi_camera(detection_results_list)

The returned results can be tested directly to check whether calibration succeeded; again, the type overloads the bool operator.

C++

if(results)
{
    std::cout << "Multi-camera calibration OK." << std::endl;
}
else
{
    std::cout << "Multi-camera calibration FAILED." << std::endl;
}
Python

if results:
    print("Multi-camera calibration OK.")
else:
    print("Multi-camera calibration FAILED.")

results contains two vectors:

  1. transforms - the transformation matrices, one per camera

  2. residuals - an indication of the calibration error, one per camera

C++

const auto &transforms = results.transforms();
const auto &residuals = results.residuals();
Python

transforms = results.transforms()
residuals = results.residuals()
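Since transforms and residuals follow the same camera order as the detections, each camera's serial can be paired with its calibration error for a quick summary. A small illustrative helper (the function name and the assumption that a residual stringifies meaningfully are ours, not the SDK's):

```python
def calibration_summary(serials, residuals):
    """Return one report line per camera, pairing its serial with its residual."""
    return [
        f"Camera {serial}: residual = {residual}"
        for serial, residual in zip(serials, residuals)
    ]
```

Printing such a summary after calibration makes it easy to spot a camera whose residual is noticeably larger than the others, which usually indicates a poor capture of the checkerboard from that viewpoint.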

Save transformation matrices to YAML

We will use the results later, so we store the transformation matrices in YAML files using the API. It is important to keep track of which transformation matrix belongs to which camera, or more precisely, to the pose of that camera during calibration. Thus, we use the camera's serial number as an identifier and include it in the file name.

C++

transforms[i].save(transformationMatricesSavePath + "/" + detectionsList[i].serialNumber + ".yaml");
Python

assert_affine_matrix_and_save(
    transform, transformation_matrices_save_path / f"{detections_list[i].serial_number}.yaml"
)
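The assert_affine_matrix_and_save helper comes from the Python samples. A plain-Python sketch of the kind of sanity check it might perform before writing the file (the exact checks in the sample may differ):

```python
import math

def assert_affine(matrix):
    """Check that a 4x4 row-major matrix looks like an affine transform:
    correct shape and a bottom row of [0, 0, 0, 1]."""
    if len(matrix) != 4 or any(len(row) != 4 for row in matrix):
        raise ValueError("Transformation matrix must be 4x4")
    expected_bottom = [0.0, 0.0, 0.0, 1.0]
    if not all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(matrix[3], expected_bottom)):
        raise ValueError("Bottom row of an affine transform must be [0, 0, 0, 1]")
```

Validating before saving catches a corrupted or mis-shaped matrix early, rather than at the point where the stitched point clouds come out visibly wrong.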

Conclusion

This tutorial shows how to use the Zivid SDK to calibrate multiple cameras against each other. You can now use the transformation matrices to stitch together point clouds from multiple cameras. See the Stitch by Transform Tutorial.