Hand-Eye Calibration with HALCON
Introduction
This guide describes how to integrate hand-eye calibration using HALCON with Zivid cameras. We generally recommend using the Hand-Eye GUI for hand-eye calibration, but this guide provides an alternative approach using HALCON and HDevelop. The operators can also be used in HALCON’s C++ and C# interfaces. Sample code in HDevelop can be found in our samples repository.
The sample data is required to run the HDevelop samples, but it can be replaced with your own data.
MVTec Documentation and the Principles of Hand-Eye Calibration in HALCON
MVTec provides comprehensive documentation that covers many more details and specifics about hand-eye calibration and machine vision in general. Hand-eye calibration is covered in the Solution Guide III-C: 3D Vision; the relevant section is Chapter 8: Robot Vision.
There are many ways to achieve a successful hand-eye calibration. None of the methods is perfect; which one to choose depends on the specific use case and requirements. In principle, the fundamentals are similar: the goal is to find a homogeneous transform between the coordinate system of the camera and the coordinate system of the robot base or robot flange. Since the data collected by Zivid cameras is 3D, the transformation can be calculated directly from the pose of the camera in the robot base coordinate system. For further information, see Hand-Eye Calibration Theory.
For information on how to use HALCON/HDevelop you can also refer to the page How to run a HALCON Sample.
Prerequisites
Zivid camera
HALCON software installed
Check out the examples: https://github.com/zivid/zivid-halcon-samples
Download Sample Data: https://support.zivid.com/en/latest/api-reference/samples/sample-data.html
3D calibration object or HALCON calibration plate
Robot with known pose parameterization (e.g. Euler angles XYX, etc.)
An understanding of hand-eye calibration concepts
Camera Settings
In general, we recommend using preset settings wherever possible. This also applies to hand-eye calibration. Ideally, the same settings used for hand-eye calibration should also be used for the application itself to ensure the best possible performance. The sample data was captured with the manufacturing presets. If you are struggling with your settings, see Camera Settings for Hand-Eye Calibration.
Calibration Object
To perform a hand-eye calibration between the two coordinate systems, we need a set of common poses or points in both coordinate systems. To generate this set of pose/point pairs, a calibration object is required; see the Calibration Object page for more details. One can either use the HALCON calibration plate or a 3D calibration object with known geometry and no symmetries.
Note
Since the data collected by the Zivid cameras is 3D, it is not necessary to use hand-eye calibration methods that rely on 2D camera intrinsics.
Robot Poses
Every robot manufacturer, and sometimes even different robot models from the same manufacturer, will have their own way of representing and exporting robot poses. While the position in Cartesian coordinates is given as X, Y, and Z (in a right-handed coordinate system), the orientation can be represented in many different ways. Euler angles in particular are parameterized in many different ways, and it is important to know which convention the robot uses. Quaternions and homogeneous transformation matrices are easier to work with, as they are unambiguous, but may not always be available.
HALCON supports numerous ways for representing orientation in poses and can convert between them. This is outlined in the documentation for the operator create_pose. The operator pose_to_hom_mat3d can be used to convert the robot poses to homogeneous transformation matrices. Mathematical description of poses can also be found in Pose Conversions.
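As a minimal sketch, a robot pose given as a translation plus Euler angles could be converted to a homogeneous transformation matrix as follows (the numeric values are placeholders, and you should verify which Euler convention matches your robot):

```
* Create a HALCON pose from a robot pose. Translation is in meters here;
* 'abg' and 'gba' select between the Euler conventions HALCON supports —
* check the create_pose documentation for which matches your robot.
create_pose (0.6, 0.05, 0.4, 180.0, 0.0, 90.0, 'Rp+T', 'abg', 'point', RobotPose)
* Convert the pose to a 4x4 homogeneous transformation matrix
pose_to_hom_mat3d (RobotPose, HomMat3DBaseTool)
```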
There are a few ways in which robot poses can be recorded. Unfortunately, HALCON provides no ready-made interface for any robot manufacturer, so this would have to be programmed using socket connections or other interfaces. For our examples, we recorded the robot poses manually and entered them into HALCON.
Hand-Eye Calibration Process
Before starting the calibration process, ensure:
Camera is warmed up
Infield correction has been performed
Calibration object is visible in the camera’s field of view for all robot poses
Steps in Hand-Eye Calibration
The hand-eye calibration process involves the following steps:
Capturing calibration object poses from multiple robot configurations
Detecting calibration object features in each image
Registering corresponding robot end-effector poses
Saving the 3D-Scene and robot pose pairs
Whether doing eye-in-hand or eye-to-hand calibration, the process is similar and comprises the steps above. The difference is that in eye-in-hand calibration the camera is mounted on the robot end-effector, while in eye-to-hand calibration the camera is fixed in the environment and observes the robot. So in the eye-in-hand case the calibration object is stationary, while in the eye-to-hand case the calibration object moves with the robot. In both cases, captures of the calibration object with different robot poses, and therefore different poses of the object in the camera’s field of view, are required.
Therefore, for the data collection, these points hold for both scenarios:
Moving the robot to 10-20 distinct poses
Ensuring diverse viewing angles and complete joint utilization
Capturing the calibration object at each pose
Recording the robot pose from the controller
See Getting a Good Dataset for more details and tips on how to get a good dataset for hand-eye calibration.
In our example, point clouds are acquired in Zivid Studio and then exported as *.zdf and *.ply (with normals) files, with the corresponding robot poses saved as a text file. The *.zdf files are required for extracting the estimated intrinsic parameters. The *.ply files are used for estimating the pose of the calibration object in each capture.
Note
Since HALCON uses the normals for the pose estimation of the calibration object, it is important to ensure that the normals are included in the exported point cloud files.
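As a minimal sketch (the file name is a placeholder), an exported scene can be loaded in HDevelop and checked for normals like this:

```
* Read the exported scene; PLY files exported from Zivid Studio with
* normals retain the per-point normal attribute needed for matching.
read_object_model_3d (ZividDataDir + '/HalconHandEyeCalibration/capture_01.ply', 'm', [], [], ObjectModel3DScene, Status)
* Verify that normals are present before surface-based matching
get_object_model_3d_params (ObjectModel3DScene, 'has_point_normals', HasNormals)
```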
For verification and debugging it is recommended to save the dataset of captures and pose pairs.
Camera Intrinsics
When using the HALCON calibration plate, we recommend loading the intrinsics obtained from the ZDF file of the capture of the calibration plate into HALCON. More information can be found in the Camera Intrinsics section of the Knowledge Base. When using the ZDF file and our corresponding Python script to save the intrinsics in the HALCON format, the intrinsics can be loaded as below.
read_cam_par(ZividDataDir + '/HalconHandEyeCalibration/estimated_intrinsics.dat', InputCameraParameters)
Feature Extraction
When using the HALCON calibration plate, the operator find_calib_object can be used to extract the pose of the calibration object in each captured image.
create_calib_data ('hand_eye_moving_cam', 1, 1, CalibDataID)
set_calib_data_calib_object (CalibDataID, 0, CalibrationPlateFile)
find_calib_object (GrayImage, CalibDataID, 0, 0, I, ['sigma'], [2])
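find_calib_object stores the observed pose in the calibration data model; it can be read back afterwards, for example (sketch, with I being the observation index used above):

```
* Retrieve the pose of the calibration plate in camera coordinates
* for camera 0, calibration object 0, observation index I
get_calib_data_observ_pose (CalibDataID, 0, 0, I, ObjInCameraPose)
```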
When using a 3D calibration object, one has to find the pose of the object within each captured scene, i.e., extract features from the calibration object in each capture to establish point correspondences across multiple viewpoints. For this pose extraction, you can use the operators create_surface_model and find_surface_model.
create_surface_model (ObjectModel3DCalibrationObject, 0.03, [], [], SurfaceModelID)
find_surface_model (SurfaceModelID, ObjectModel3DScene, 0.05, 0.2, 0, 'false', [], [], PoseObject, Score, SurfaceMatchingResultID)
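find_surface_model also returns a matching Score, which is worth checking before the pose is used; the threshold below is only an assumed example value:

```
* Inspect the matching result before accepting the pose; if matching
* failed or is unreliable, recapture or adjust the matching parameters.
if (|Score| == 0 or Score[0] < 0.2)
    * Reject this capture
endif
```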
Transformation Computation
Now, in HALCON, we collect the pose pairs of the calibration object and the robot and register them in a calibration data model:
create_calib_data ('hand_eye_moving_cam', 0, 0, CalibDataID)
for J := 0 to NumberOfCaptures - 1 by 1
    * Pose of the calibration object in camera coordinates for capture J
    * (PoseObjects[J] denotes the J-th recorded 7-element pose)
    set_calib_data_observ_pose (CalibDataID, 0, 0, J, PoseObjects[J])
    * Corresponding robot pose (tool in base coordinates)
    set_calib_data (CalibDataID, 'tool', J, 'tool_in_base_pose', RobotPoses[J])
endfor
Once the robot pose and calibration object pose pairs are collected, the transformation can be computed using HALCON’s calibrate_hand_eye operator.
calibrate_hand_eye (CalibDataID, Errors)
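After calibrate_hand_eye, the resulting transformations can be read back from the calibration data model. For the moving-camera (eye-in-hand) case, for example:

```
* Pose of the robot tool (flange) in camera coordinates
get_calib_data (CalibDataID, 'camera', 0, 'tool_in_cam_pose', ToolInCamPose)
* Pose of the calibration object in robot base coordinates
get_calib_data (CalibDataID, 'calib_obj', 0, 'obj_in_base_pose', ObjInBasePose)
```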
Verification
After calibration, verify the results by:
Checking residuals for each calibration pose
Performing a touch test with the robot
Visually inspecting point cloud alignment
This can be done in HALCON/HDevelop using its operators and 3D visualization methods. One can also compare the resulting homogeneous transformation matrix with the one obtained from the Hand-Eye GUI. See Hand-Eye Calibration Verification for more information on evaluating calibration quality.
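One way to do the visual inspection in HDevelop (a sketch; the variable names for the calibration result, robot pose, and scene are assumptions) is to transform a captured scene into the robot base frame and check whether the calibration object lands where expected. For eye-in-hand, the chain is base_H_cam = base_H_tool · tool_H_cam:

```
* ToolInCamPose is the pose of the tool in camera coordinates (cam_H_tool)
pose_to_hom_mat3d (ToolInCamPose, HomMat3DCamTool)
hom_mat3d_invert (HomMat3DCamTool, HomMat3DToolCam)
* RobotPose is the recorded tool-in-base pose for this capture (base_H_tool)
pose_to_hom_mat3d (RobotPose, HomMat3DBaseTool)
hom_mat3d_compose (HomMat3DBaseTool, HomMat3DToolCam, HomMat3DBaseCam)
hom_mat3d_to_pose (HomMat3DBaseCam, BaseCamPose)
* Transform the captured scene from camera to robot base coordinates
rigid_trans_object_model_3d (ObjectModel3DScene, BaseCamPose, ObjectModel3DSceneInBase)
```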
Next Steps
Continue with How To Use the Result of Hand-Eye Calibration to learn how to apply your calibration results.