Multi-Camera Calibration Tutorial
This tutorial describes how to use the Zivid SDK to calibrate multiple cameras against each other. The result of this calibration is a transformation matrix from each camera to a primary camera. By default, the primary camera is the first camera connected to, but it can be selected explicitly via its serial number.
Prerequisites
You should have installed the Zivid SDK and the C++ samples. For more details, see zivid-cpp-samples or zivid-python-samples. This tutorial does not cover the basics of the SDK, such as initialization and capture; see the Capture Tutorial for that.
Connect to cameras
In this tutorial, we will connect to all available cameras. We can do so via
auto cameras = zivid.cameras();
std::cout << "Number of cameras found: " << cameras.size() << std::endl;
auto connectedCameras = connectToAllAvailableCameras(cameras);
if(connectedCameras.size() < 2)
{
    throw std::runtime_error("At least two cameras need to be connected");
}
std::cout << "Number of connected cameras: " << connectedCameras.size() << std::endl;
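The helper used above can be sketched as follows. This is a minimal sketch assuming the Zivid C++ API; the exact implementation in the sample may differ. It connects to every camera that reports itself as available and returns the connected handles:

```cpp
std::vector<Zivid::Camera> connectToAllAvailableCameras(const std::vector<Zivid::Camera> &cameras)
{
    std::vector<Zivid::Camera> connectedCameras;
    for(auto camera : cameras)
    {
        // Skip cameras that are busy, disconnected, or otherwise unavailable
        if(camera.state().status() == Zivid::CameraState::Status::available)
        {
            camera.connect();
            connectedCameras.push_back(camera);
        }
    }
    return connectedCameras;
}
```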
Capture calibration object
We are now ready to capture the calibration object.
In the sample, capture is performed with a dedicated calibration board capture function.
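Assuming the Zivid SDK 2.x calibration API, that function is Zivid::Calibration::captureCalibrationBoard, which captures the board with settings suited for detection:

```cpp
// Capture the calibration board with detection-friendly settings
const auto frame = Zivid::Calibration::captureCalibrationBoard(camera);
```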
If we instead load the point cloud from a file, we simply replace that capture line with a frame loaded from the file.
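Assuming the point cloud was stored as a ZDF file, the replacement could be a frame constructed from the file path (fileName is a hypothetical per-camera path):

```cpp
// Load a previously captured point cloud instead of capturing live
const auto frame = Zivid::Frame(fileName);
```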
Detect checkerboard feature points
The calibration object we use in this tutorial is a checkerboard. Before we can run calibration, we must detect feature points from the checkerboard from all cameras.
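A sketch of the detection step, assuming the SDK's Zivid::Calibration::detectFeaturePoints function:

```cpp
// Detect the checkerboard feature points in the captured point cloud
const auto detectionResult = Zivid::Calibration::detectFeaturePoints(frame.pointCloud());
```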
At this point, we can verify that the capture quality was good enough. detectionResult is of a type that can be tested directly: it overloads the bool operator to provide this information. When the detection passes the quality check, we save the detection result together with the serial number of the camera used.
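Sketched with the Zivid SDK, where detectionResults and serialNumbers are assumed containers collected across all connected cameras, this could look like:

```cpp
if(detectionResult)
{
    // Keep the detection and remember which camera it came from
    detectionResults.push_back(detectionResult);
    serialNumbers.push_back(camera.info().serialNumber().value());
}
else
{
    throw std::runtime_error(
        "Could not detect the checkerboard from camera " + camera.info().serialNumber().value());
}
```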
Perform Multi-Camera Calibration
Now that we have detected all feature points in all captures from all cameras, we can perform the multi-camera calibration.
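Assuming the SDK's Zivid::Calibration::calibrateMultiCamera function, the calibration itself is a single call over the collected detection results:

```cpp
// Calibrate all cameras against the primary camera in one call
const auto results = Zivid::Calibration::calibrateMultiCamera(detectionResults);
```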
The returned results can be checked directly as to whether or not calibration was successful; again, the type overloads the bool operator. results contains two vectors:
transforms - contains all transformation matrices
residuals - contains an indication of the calibration error
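Accessing the two vectors might look like this, a sketch assuming the result type exposes transforms() and residuals() accessors:

```cpp
if(results)
{
    const auto transforms = results.transforms();
    const auto residuals = results.residuals();
    for(size_t i = 0; i < transforms.size(); ++i)
    {
        // Report the calibration error indication per camera
        std::cout << "Camera " << serialNumbers[i] << " residual: " << residuals[i] << std::endl;
    }
}
else
{
    throw std::runtime_error("Multi-camera calibration failed");
}
```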
Save transformation matrices to YAML
Later we will use these results, so we store the transformations in YAML files via the API. It is important to keep track of which transformation matrix belongs to which camera, or more precisely, to the pose of that camera during calibration. Thus, we use the camera's serial number as an identifier and include it in the file name.
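In the Zivid API, a transformation matrix can likely be saved directly (e.g. transform.save(serialNumber + ".yaml")). As a self-contained illustration without the SDK, writing one YAML file per camera might look like the following; Matrix4x4, toYaml, and saveTransform are hypothetical stand-ins, not Zivid types:

```cpp
#include <array>
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical stand-in for the SDK's 4x4 transform type: row-major floats
using Matrix4x4 = std::array<std::array<float, 4>, 4>;

// Format a transformation matrix as a small YAML document
std::string toYaml(const Matrix4x4 &matrix)
{
    std::ostringstream out;
    out << "TransformationMatrix:\n";
    for(const auto &row : matrix)
    {
        out << "  - [" << row[0] << ", " << row[1] << ", " << row[2] << ", " << row[3] << "]\n";
    }
    return out.str();
}

// Write one YAML file per camera, named by serial number, so that each
// transformation matrix can later be matched to the camera it belongs to
void saveTransform(const std::string &serialNumber, const Matrix4x4 &transform)
{
    std::ofstream file(serialNumber + ".yaml");
    file << toYaml(transform);
}
```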
Conclusion
This tutorial shows how to use the Zivid SDK to calibrate multiple cameras against each other. You can now use the transformation matrices to stitch together point clouds from multiple cameras; see the Stitch by Transform Tutorial.