Transform via ArUco marker
This tutorial demonstrates how to estimate the pose of an ArUco marker and use the resulting 4x4 homogeneous transformation matrix to transform a point cloud to the ArUco marker coordinate system.
Tip
The Zivid calibration board contains an ArUco marker.
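As a quick reminder, a 4x4 homogeneous transformation matrix combines a 3x3 rotation matrix R with a 3x1 translation vector t, and is applied to points written in homogeneous coordinates (this is the standard convention, not something specific to Zivid):

\[
T = \begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix},
\qquad
\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = T \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\]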
This tutorial uses the point cloud of an ArUco marker displayed in the image below.
We can open the original point cloud in Zivid Studio and inspect it.
Note
The original point cloud is also available in Sample Data.
Now, we can manually set the Z Range from 540 mm to 735 mm in the Depth view. This shows that the marker is at a distance of approximately 570 mm, and that the marker plane is at an angle relative to the camera.
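This distance can also be checked programmatically. The sketch below shows a hypothetical helper (not part of the original sample) that prints the range of valid Z values; it could be called on the point cloud we load in the next step, and it assumes that Zivid::Array2D exposes its contents through data(), width() and height().
// Hypothetical helper: print the Z range of the valid (non-NaN) points in a point cloud.
// Requires <algorithm>, <cmath>, <iostream> and <limits> in addition to Zivid/Zivid.h.
void printZRange(const Zivid::PointCloud &pointCloud)
{
    const auto points = pointCloud.copyPointsXYZ();
    auto zMin = std::numeric_limits<float>::max();
    auto zMax = std::numeric_limits<float>::lowest();
    for(std::size_t i = 0; i < points.width() * points.height(); i++)
    {
        const auto z = points.data()[i].z;
        if(!std::isnan(z))
        {
            zMin = std::min(zMin, z);
            zMax = std::max(zMax, z);
        }
    }
    std::cout << "Z range of valid points: " << zMin << " mm to " << zMax << " mm" << std::endl;
}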
First, we load a point cloud of an ArUco marker.
const auto arucoMarkerFile = std::string(ZIVID_SAMPLE_DATA_DIR) + "/CalibrationBoardInCameraOrigin.zdf";
std::cout << "Reading ZDF frame from file: " << arucoMarkerFile << std::endl;
const auto frame = Zivid::Frame(arucoMarkerFile);
auto pointCloud = frame.pointCloud();
Since we are using OpenCV, we need to convert the 2D color image from the point cloud to the OpenCV image format.
std::cout << "Converting to OpenCV image format" << std::endl;
const auto bgrImage = pointCloudToColorBGR(pointCloud);
const auto grayImage = colorBGRToGray(bgrImage);
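The helper functions pointCloudToColorBGR and colorBGRToGray come from the full sample and are not shown here. Below is a minimal sketch of what they could look like, assuming OpenCV 4 and Zivid::PointCloud::copyColorsRGBA(); it is an illustration, not the sample's exact implementation.
// Sketch: convert the point cloud colors to an OpenCV BGR image.
// Requires <cstring> and opencv2/opencv.hpp in addition to Zivid/Zivid.h.
cv::Mat pointCloudToColorBGR(const Zivid::PointCloud &pointCloud)
{
    const auto colors = pointCloud.copyColorsRGBA(); // Zivid::Array2D<Zivid::ColorRGBA>
    cv::Mat rgba(static_cast<int>(colors.height()), static_cast<int>(colors.width()), CV_8UC4);
    std::memcpy(rgba.data, colors.data(), colors.width() * colors.height() * sizeof(Zivid::ColorRGBA));
    cv::Mat bgr;
    cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR);
    return bgr;
}

// Sketch: convert a BGR image to grayscale for marker detection.
cv::Mat colorBGRToGray(const cv::Mat &bgr)
{
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    return gray;
}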
Then we configure the ArUco marker dictionary and the detection parameters.
std::cout << "Configuring ArUco marker" << std::endl;
const auto markerDictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_100);
std::vector<int> markerIds;
std::vector<std::vector<cv::Point2f>> markerCorners;
cv::Ptr<cv::aruco::DetectorParameters> detectorParameters = cv::aruco::DetectorParameters::create();
detectorParameters->cornerRefinementMethod = cv::aruco::CORNER_REFINE_SUBPIX;
We then detect the ArUco marker in the 2D image and display it.
std::cout << "Detecting ArUco Marker" << std::endl;
cv::aruco::detectMarkers(grayImage, markerDictionary, markerCorners, markerIds, detectorParameters);
std::cout << "Displaying detected ArUco marker" << std::endl;
cv::aruco::drawDetectedMarkers(bgrImage, markerCorners);
displayBGR(bgrImage, "ArucoMarkerDetected");
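displayBGR is another small helper from the full sample. It can be as simple as showing the image in an OpenCV window, for example:
// Sketch of a simple display helper (illustrative).
void displayBGR(const cv::Mat &bgr, const std::string &bgrName)
{
    cv::namedWindow(bgrName, cv::WINDOW_AUTOSIZE);
    cv::imshow(bgrName, bgr);
    cv::waitKey(0); // wait for a key press before continuing
}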
Then we estimate the pose of the ArUco marker.
std::cout << "Estimating pose of detected ArUco marker" << std::endl;
const auto transformMarkerToCamera = estimateArUcoMarkerPose(pointCloud, markerCorners[0]);
std::cout << "Camera pose in ArUco marker frame:" << std::endl;
std::cout << transformMarkerToCamera << std::endl;
const auto transformCameraToMarker = transformMarkerToCamera.inverse();
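estimateArUcoMarkerPose is also a helper from the full sample. The idea is to look up the 3D coordinates of the detected 2D corners in the organized point cloud, take the corner average as the translation, and build the rotation from the corner-to-corner directions. The sketch below only illustrates that idea and is not the actual implementation: it ignores missing (NaN) points and assumes that Zivid::Matrix4x4 elements can be set via operator()(row, col).
// Rough, illustrative sketch of marker pose estimation from the organized point cloud.
Zivid::PointXYZ cornerIn3D(const Zivid::Array2D<Zivid::PointXYZ> &points, const cv::Point2f &corner)
{
    const auto row = static_cast<std::size_t>(std::lround(corner.y));
    const auto col = static_cast<std::size_t>(std::lround(corner.x));
    return points.data()[row * points.width() + col]; // assumes row-major storage and a valid point
}

Zivid::Matrix4x4 estimateArUcoMarkerPose(const Zivid::PointCloud &pointCloud, const std::vector<cv::Point2f> &markerCorners)
{
    const auto points = pointCloud.copyPointsXYZ();

    // OpenCV corner order: top-left, top-right, bottom-right, bottom-left.
    const auto c0 = cornerIn3D(points, markerCorners[0]);
    const auto c1 = cornerIn3D(points, markerCorners[1]);
    const auto c2 = cornerIn3D(points, markerCorners[2]);
    const auto c3 = cornerIn3D(points, markerCorners[3]);

    // Translation: the marker center in the camera frame.
    const float t[3] = { (c0.x + c1.x + c2.x + c3.x) / 4.0f,
                         (c0.y + c1.y + c2.y + c3.y) / 4.0f,
                         (c0.z + c1.z + c2.z + c3.z) / 4.0f };

    // Rotation: X axis along the top edge, Y axis along the left edge (pointing up),
    // Z axis as their cross product (the marker normal).
    const auto normalize = [](float v[3]) {
        const float n = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        v[0] /= n;
        v[1] /= n;
        v[2] /= n;
    };
    float x[3] = { c1.x - c0.x, c1.y - c0.y, c1.z - c0.z };
    float y[3] = { c0.x - c3.x, c0.y - c3.y, c0.z - c3.z };
    normalize(x);
    normalize(y);
    float z[3] = { x[1] * y[2] - x[2] * y[1], x[2] * y[0] - x[0] * y[2], x[0] * y[1] - x[1] * y[0] };
    normalize(z);
    // Recompute Y = Z x X so that the axes form an orthonormal, right-handed basis.
    const float yOrtho[3] = { z[1] * x[2] - z[2] * x[1], z[2] * x[0] - z[0] * x[2], z[0] * x[1] - z[1] * x[0] };

    // Assemble the 4x4 homogeneous transform: columns are the marker axes expressed in
    // the camera frame, the last column is the marker origin (marker pose in camera frame).
    Zivid::Matrix4x4 pose;
    for(std::size_t row = 0; row < 3; row++)
    {
        pose(row, 0) = x[row];
        pose(row, 1) = yOrtho[row];
        pose(row, 2) = z[row];
        pose(row, 3) = t[row];
        pose(3, row) = 0.0f;
    }
    pose(3, 3) = 1.0f;
    return pose;
}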
Lastly, we transform the point cloud to the ArUco marker coordinate system and save the transformed point cloud to disk.
std::cout << "Transforming point cloud from camera frame to ArUco marker frame" << std::endl;
pointCloud.transform(transformCameraToMarker);
std::cout << "Saving transformed point cloud to file: " << arucoMarkerTransformedFile << std::endl;
frame.save(arucoMarkerTransformedFile);
Hint
Learn more about Position, Orientation and Coordinate Transformations.
Now we can open the transformed point cloud in Zivid Studio and inspect it.
Note
You may have to zoom out in Zivid Studio to find the data, because the default viewpoint is not well suited for transformed point clouds.
We can now manually set the Z Range from -35 mm to 1 mm in the Depth view. This filters out all data except the calibration board and the object located next to it. We can see that the Z value is the same across the calibration board, and the color gradient shows that this value is approximately 0. This means that the origin of the point cloud is now on the ArUco marker.
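We can also verify this programmatically: after the in-place transform, the 3D points at the detected marker corner pixels should lie close to the Z = 0 plane. A small sketch, reusing pointCloud and markerCorners from the code above:
// Sketch: after the transform, the marker corners should have Z close to 0 mm.
const auto transformedPoints = pointCloud.copyPointsXYZ();
for(const auto &corner : markerCorners[0])
{
    const auto row = static_cast<std::size_t>(std::lround(corner.y));
    const auto col = static_cast<std::size_t>(std::lround(corner.x));
    const auto point = transformedPoints.data()[row * transformedPoints.width() + col];
    std::cout << "Marker corner Z after transform: " << point.z << " mm" << std::endl;
}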