Multi-Camera Calibration

This article is an in-depth description of the multi-camera calibration problem, its solution, and its application.

In order to introduce the real problem, let us first briefly look at the application. Like any camera or the human eye, the Zivid camera can only see one side of an object; there is no X-ray vision. If we want to see an object from behind, we have to rotate the object or move the camera behind it. So, to get a point cloud of an object from all sides, we need to capture it from all sides.

Now to the problem. When a point cloud is returned by the Zivid SDK, the origin of its coordinate frame is inside the camera. If you capture a scene from multiple locations, you get point clouds whose reference frames each have a different pose relative to the world frame. So, how do you visualize the combined result, or work with it in any way?
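
To make this concrete, here is a sketch of the relationship in homogeneous coordinates; the notation (T_i for the pose of camera i, p for a point) is ours, not the SDK's:

```latex
% p^{cam_i} : a point measured in the frame of camera i (homogeneous coordinates)
% T_i       : 4x4 pose of camera i in the common world frame, unknown before calibration
p^{world} = T_i \, p^{cam_i}
```

Multi-camera calibration estimates one such pose per camera, typically with one of the cameras chosen as the reference (world) frame, so that every point cloud can be expressed in that single frame.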

In summary:

Problem

Point clouds captured from different positions relative to the object are not expressed in the same world coordinate frame.

Solution

  1. Calibrate the camera poses against each other and obtain the transformation matrices.

  2. Use the transformation matrices to transform the point clouds into one common coordinate frame (see the sketch after this list).

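As a rough illustration of step 2, the sketch below applies 4x4 transformation matrices to point clouds and stacks the results in one common frame. It is a minimal NumPy example with made-up data: the names (cloud_a, transform_b, and so on) and the placeholder matrices are assumptions for illustration, not output from the Zivid SDK.

```python
import numpy as np

def transform_point_cloud(points_xyz, transform):
    """Apply a 4x4 homogeneous transformation to an N x 3 point cloud."""
    ones = np.ones((points_xyz.shape[0], 1))
    homogeneous = np.hstack((points_xyz, ones))   # N x 4 homogeneous coordinates
    transformed = homogeneous @ transform.T       # apply the transformation
    return transformed[:, :3]                     # back to N x 3

# Made-up stand-ins for point clouds captured from two camera poses.
cloud_a = np.random.rand(1000, 3)
cloud_b = np.random.rand(1000, 3)

# Made-up stand-ins for the poses obtained from multi-camera calibration.
transform_a = np.eye(4)                           # camera A chosen as the common frame
transform_b = np.array([[0.0, -1.0, 0.0, 0.2],
                        [1.0,  0.0, 0.0, 0.0],
                        [0.0,  0.0, 1.0, 0.1],
                        [0.0,  0.0, 0.0, 1.0]])   # pose of camera B relative to camera A

# After transformation, both clouds live in the same frame and can be merged.
merged = np.vstack((transform_point_cloud(cloud_a, transform_a),
                    transform_point_cloud(cloud_b, transform_b)))
```

In practice, the transformation matrices come from the multi-camera calibration covered in the tutorial and samples referenced below, and the point clouds come from the captures themselves.
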
This article focuses on the practical side of the calibration problem. The actual calibration is performed by the Zivid SDK, and a separate Multi-camera calibration programming tutorial, together with the Multi-camera calibration samples, covers this. For more information about the theory behind this kind of calibration, please see Multi Camera Calibration Theory.

For a detailed description of the topic, please read the following pages: