Transform via ArUco marker

This tutorial demonstrates how to estimate the pose of an ArUco marker and use the resulting 4x4 homogeneous transformation matrix to transform a point cloud to the ArUco marker coordinate system.
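As a quick reminder, a 4x4 homogeneous transformation matrix packs a 3x3 rotation R and a translation t into a single matrix, and maps a point p to R p + t. The following minimal sketch, using plain NumPy and made-up values (independent of the Zivid API), illustrates the operation that is later applied to the whole point cloud.

import numpy as np

# A 4x4 homogeneous transform: the top-left 3x3 block is the rotation,
# the last column is the translation (in mm). Values are made up for illustration.
transform = np.array(
    [
        [1.0, 0.0, 0.0, 10.0],
        [0.0, 0.0, -1.0, 0.0],
        [0.0, 1.0, 0.0, 500.0],
        [0.0, 0.0, 0.0, 1.0],
    ]
)

point = np.array([100.0, 50.0, 600.0, 1.0])  # homogeneous coordinates (x, y, z, 1)
transformed_point = transform @ point        # same result as R @ p + t
print(transformed_point[:3])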

Tip

The Zivid calibration board contains an ArUco marker.

This tutorial uses the point cloud of an ArUco marker displayed in the image below.

ArUco marker 2D image

We can open the original point cloud in Zivid Studio and inspect it.

Note

The original point cloud is also in Sample Data.

Now, we can manually set the Z Range from 540 mm to 735 mm in the Depth view. This allows us to see that the marker is approximately 570 mm from the camera, and that the marker frame is at an angle to the camera frame.

ArUco marker in camera coordinate system

First, we load a point cloud of an ArUco marker.

C++

const auto arucoMarkerFile = std::string(ZIVID_SAMPLE_DATA_DIR) + "/CalibrationBoardInCameraOrigin.zdf";
std::cout << "Reading ZDF frame from file: " << arucoMarkerFile << std::endl;
const auto frame = Zivid::Frame(arucoMarkerFile);
auto pointCloud = frame.pointCloud();
C#

var calibrationBoardFile = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)
               + "/Zivid/CalibrationBoardInCameraOrigin.zdf";
Console.WriteLine("Reading ZDF frame from file: " + calibrationBoardFile);
var frame = new Zivid.NET.Frame(calibrationBoardFile);
var pointCloud = frame.PointCloud;
Python

data_file = get_sample_data_path() / "CalibrationBoardInCameraOrigin.zdf"
print(f"Reading ZDF frame from file: {data_file}")
frame = zivid.Frame(data_file)
point_cloud = frame.point_cloud()
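Note that in a standalone Python script a zivid.Application instance must exist before frames and point clouds can be used. A minimal runnable sketch of the loading step, assuming the ZDF file is in the working directory:

import zivid

app = zivid.Application()  # must be kept alive while working with frames and point clouds
frame = zivid.Frame("CalibrationBoardInCameraOrigin.zdf")  # path to the sample data file
point_cloud = frame.point_cloud()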

Then we configure the ArUco marker detection by specifying the marker dictionary and the marker ID.

C++

std::cout << "Configuring ArUco marker" << std::endl;
const auto markerDictionary = Zivid::Calibration::MarkerDictionary::aruco4x4_50;
std::vector<int> markerId = { 1 };
C#

Console.WriteLine("Configuring ArUco marker");
var markerDictionary = Zivid.NET.MarkerDictionary.Aruco4x4_50;
var markerId = new List<int> { 1 };
Python

print("Configuring ArUco marker")
marker_dictionary = zivid.calibration.MarkerDictionary.aruco4x4_50
marker_id = [1]

We then detect the ArUco marker.

C++

std::cout << "Detecting ArUco marker" << std::endl;
const auto detectionResult = Zivid::Calibration::detectMarkers(frame, markerId, markerDictionary);
C#

Console.WriteLine("Detecting ArUco marker");
var detectionResult = Detector.DetectMarkers(frame, markerId, markerDictionary);
Python

print("Detecting ArUco marker")
detection_result = zivid.calibration.detect_markers(frame, marker_id, marker_dictionary)
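The next step indexes the first detected marker, so it is a good idea to verify that the marker was actually found. A minimal sketch using only the calls shown above:

detected_markers = detection_result.detected_markers()
if not detected_markers:
    raise RuntimeError("No ArUco marker with the requested ID was detected in the frame")
print(f"Number of detected markers: {len(detected_markers)}")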

Then we estimate the pose of the ArUco marker.

C++

std::cout << "Estimating pose of detected ArUco marker" << std::endl;
const auto transformCameraToMarker = detectionResult.detectedMarkers()[0].pose().toMatrix();
C#

Console.WriteLine("Estimating pose of detected ArUco marker");
var transformCameraToMarker = new Zivid.NET.Matrix4x4(detectionResult.DetectedMarkers()[0].Pose().ToMatrix());
Python

print("Estimating pose of detected ArUco marker")
transform_camera_to_marker = detection_result.detected_markers()[0].pose.to_matrix()
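The resulting 4x4 matrix holds the rotation in its top-left 3x3 block and the translation (in mm) in its last column. If you want to inspect the two parts separately, here is a small NumPy sketch, assuming transform_camera_to_marker is the array returned by the Python snippet above:

import numpy as np

transform_camera_to_marker = np.asarray(transform_camera_to_marker)
rotation = transform_camera_to_marker[:3, :3]    # 3x3 rotation matrix
translation = transform_camera_to_marker[:3, 3]  # translation vector in mm
print(f"Rotation:\n{rotation}")
print(f"Translation (mm): {translation}")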

Before transforming the point cloud, we invert the transformation matrix in order to get the pose of the camera in the ArUco marker coordinate system.

C++

std::cout << "Camera pose in ArUco marker frame:" << std::endl;
const auto transformMarkerToCamera = transformCameraToMarker.inverse();
C#

Console.WriteLine("Camera pose in ArUco marker frame:");
var transformMarkerToCamera = transformCameraToMarker.Inverse();
Python

print("Camera pose in ArUco marker frame:")
transform_marker_to_camera = np.linalg.inv(transform_camera_to_marker)
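For a rigid transform, np.linalg.inv gives the same result as the closed-form inverse built from R^T and -R^T t, which avoids a general matrix inversion and keeps the rotation orthonormal. A minimal sketch of that alternative, reusing the names from the Python snippet above:

import numpy as np

rotation = transform_camera_to_marker[:3, :3]
translation = transform_camera_to_marker[:3, 3]

transform_marker_to_camera = np.eye(4)
transform_marker_to_camera[:3, :3] = rotation.T               # inverse of a rotation is its transpose
transform_marker_to_camera[:3, 3] = -rotation.T @ translation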

We then save the inverted pose to a YAML file.

C++

const auto transformFile = "ArUcoMarkerToCameraTransform.yaml";
std::cout << "Saving a YAML file with Inverted ArUco marker pose to file: " << transformFile << std::endl;
transformMarkerToCamera.save(transformFile);
C#

var transformFile = "ArUcoMarkerToCameraTransform.yaml";
Console.WriteLine("Saving a YAML file with Inverted ArUco marker pose to file: " + transformFile);
transformMarkerToCamera.Save(transformFile);
Python

transform_file = Path("ArUcoMarkerToCameraTransform.yaml")
print("Saving a YAML file with Inverted ArUco marker pose to file: ")
assert_affine_matrix_and_save(transform_marker_to_camera, transform_file)

This is the content of the YAML file:

__version__:
    serializer: 1
    data: 1
FloatMatrix:
    Data:
        [
            [0.978564, 0.0506282, 0.1996225, 21.54072],
            [-0.04527659, -0.892707, 0.4483572, -208.0268],
            [0.2009039, -0.4477845, -0.8712788, 547.6984],
            [0, 0, 0, 1],
        ]
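If you later need this pose in Python, the file can be read back into a 4x4 NumPy array. The following is a sketch of a hypothetical helper, assuming PyYAML is installed; the Zivid samples use their own utility functions for this.

import numpy as np
import yaml

def load_transform_yaml(file_path):
    """Read a transform YAML file in the format above into a 4x4 NumPy array (hypothetical helper)."""
    with open(file_path) as f:
        content = yaml.safe_load(f)
    return np.array(content["FloatMatrix"]["Data"])

transform_marker_to_camera = load_transform_yaml("ArUcoMarkerToCameraTransform.yaml")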

After that, we transform the point cloud to the ArUco marker coordinate system.

C++

std::cout << "Transforming point cloud from camera frame to ArUco marker frame" << std::endl;
pointCloud.transform(transformMarkerToCamera);
C#

Console.WriteLine("Transforming point cloud from camera frame to ArUco marker frame");
pointCloud.Transform(transformMarkerToCamera);
Python

print("Transforming point cloud from camera frame to ArUco marker frame")
point_cloud.transform(transform_marker_to_camera)
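Conceptually, point_cloud.transform applies the 4x4 matrix to every (x, y, z) point in place. The sketch below shows the equivalent NumPy computation, assuming the xyz data was copied out before calling transform; it is for illustration only.

import numpy as np

# Assuming xyz was copied from the point cloud before point_cloud.transform(...) was called
xyz = point_cloud.copy_data("xyz")  # H x W x 3 array in mm, NaN where there is no data
rotation = transform_marker_to_camera[:3, :3]
translation = transform_marker_to_camera[:3, 3]
xyz_in_marker_frame = xyz @ rotation.T + translation  # R @ p + t for every point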

Before saving the transformed point cloud, we can convert it to an OpenCV 2D image format and draw the detected ArUco marker.

C++

std::cout << "Converting to OpenCV image format" << std::endl;
const auto bgraImage = pointCloudToColorBGRA(pointCloud);
std::cout << "Displaying detected ArUco marker" << std::endl;
const auto bgr = drawDetectedMarker(bgraImage, detectionResult);
displayBGR(bgr, "ArucoMarkerDetected");
Python

print("Converting to OpenCV image format")
bgra_image = point_cloud.copy_data("bgra")
print("Displaying detected ArUco marker")
bgr = _draw_detected_marker(bgra_image, detection_result)
display_bgr(bgr, "ArucoMarkerDetected")
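display_bgr and _draw_detected_marker are helper functions from the sample utilities. As an indication of what the display step involves, here is a minimal sketch of an image display helper using OpenCV, assuming opencv-python is installed (not the sample's exact implementation):

import cv2

def display_bgr(bgr, window_name):
    """Show a BGR image in a window until a key is pressed (sketch of a display helper)."""
    cv2.imshow(window_name, bgr)
    cv2.waitKey(0)
    cv2.destroyAllWindows()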

Here we can see the image that will be displayed, and we can observe where the detected ArUco marker, and thus the origin of the new coordinate system, is located.

ArUco marker

Lastly we save the transformed point cloud to disk.

C++

const auto arucoMarkerTransformedFile = "CalibrationBoardInArucoMarkerOrigin.zdf";
std::cout << "Saving transformed point cloud to file: " << arucoMarkerTransformedFile << std::endl;
frame.save(arucoMarkerTransformedFile);
C#

var arucoMarkerTransformedFile = "CalibrationBoardInArucoMarkerOrigin.zdf";
Console.WriteLine("Saving transformed point cloud to file: " + arucoMarkerTransformedFile);
frame.Save(arucoMarkerTransformedFile);
Python

aruco_marker_transformed_file = "CalibrationBoardInArucoMarkerOrigin.zdf"
print(f"Saving transformed point cloud to file: {aruco_marker_transformed_file}")
frame.save(aruco_marker_transformed_file)

Now we can open the transformed point cloud in Zivid Studio and inspect it.

Note

Zoom out in Zivid Studio to find the data, because the default viewpoint is not well suited for transformed point clouds.

We can now manually set the Z Range from -35 mm to 1 mm in the Depth view. This filters out all data except the calibration board and the object next to it. We can then see that the Z value is the same across the calibration board, and from the color gradient we can verify that it is approximately 0. This means that the origin of the point cloud is on the ArUco marker.
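The same check can be done programmatically by loading the transformed point cloud and inspecting the Z values around the new origin. A minimal sketch, assuming the file saved in the previous step and an arbitrary 100 mm region around the marker center:

import numpy as np
import zivid

app = zivid.Application()
frame = zivid.Frame("CalibrationBoardInArucoMarkerOrigin.zdf")
xyz = frame.point_cloud().copy_data("xyz")

# Points within 100 mm of the marker origin in X and Y (NaN pixels are excluded)
near_origin = (np.abs(xyz[:, :, 0]) < 100) & (np.abs(xyz[:, :, 1]) < 100)
print(f"Median Z near the marker origin: {np.nanmedian(xyz[:, :, 2][near_origin]):.1f} mm")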

ArUco marker in ArUco marker coordinate system

To transform the point cloud to the ArUco marker coordinate system, you can run our code sample.

Sample: TransformPointCloudViaArucoMarker.cpp

./TransformPointCloudViaArucoMarker

Sample: TransformPointCloudViaArucoMarker.cs

./TransformPointCloudViaArucoMarker

Sample: transform_point_cloud_via_aruco_marker.py

python transform_point_cloud_via_aruco_marker.py

Tip

Modify the code sample if you wish to use this in your own setup:

  1. Replace the ZDF file loading with a capture from your own camera and settings.

  2. Place the ArUco marker in your scene.

  3. Run the sample!