Zivid CLI Tool For Hand-Eye Calibration

This tutorial shows how to use the Command Line Interface (CLI) tool for Hand-Eye calibration. Step-by-step instructions are provided with screenshots below.


The ZividExperimentalHandEyeCalibration.exe CLI tool, which uses the .yaml file format for robot poses, is experimental. It will eventually be replaced by a GUI.



Acquire the dataset

If you haven’t read our complete tutorial on Hand-Eye Calibration, we encourage you to do so. At a minimum, read the Hand-Eye Calibration Process to learn about the required robot poses, and How To Get Good Quality Data On Zivid Calibration Board. It can also be very helpful to read Cautions And Recommendations.

The dataset is expected to contain 10 to 20 pairs of Zivid checkerboard point clouds in the .zdf file format and corresponding robot poses in the .yaml file format. The naming convention is:

  • Point clouds: img01.zdf, img02.zdf, img03.zdf, and so on

  • Robot poses: pos01.yaml, pos02.yaml, pos03.yaml, and so on
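As a minimal sketch, the naming convention above can be checked with a short Python script before running the calibration, so that every point cloud has a matching pose file (the function name and folder path are illustrative):

```python
# Sketch: verify that every imgNN.zdf in the dataset folder has a
# matching posNN.yaml, following the naming convention above.
import re
from pathlib import Path


def check_dataset(folder):
    """Return (indices missing a pose file, indices missing a point cloud)."""
    folder = Path(folder)
    imgs = {re.match(r"img(\d+)\.zdf", p.name).group(1)
            for p in folder.glob("img*.zdf")}
    poses = {re.match(r"pos(\d+)\.yaml", p.name).group(1)
             for p in folder.glob("pos*.yaml")}
    return sorted(imgs - poses), sorted(poses - imgs)


# Example usage (path is a placeholder):
# missing_poses, missing_imgs = check_dataset(r"C:\path\to\dataset")
```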


The translation part of the robot pose must be in millimeters (mm).

Here’s an example of a robot pose available for download - pos01.yaml. To learn how to read and write files in the .yaml format, check out the OpenCV YAML file storage class. The dataset folder should look similar to this:


Robot pose files are in the .yaml file format, unlike the camera settings and configuration files, which are in the .yml file format.
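A pose file in the OpenCV-style matrix YAML layout can be generated with plain Python, as in the sketch below. The "PoseState" key name and the pose values are illustrative assumptions; match them to the format your pose files actually use, and remember that the translation column must be in millimeters.

```python
# Sketch: write a 4x4 homogeneous robot pose in OpenCV's YAML matrix layout.
# NOTE: the "PoseState" key name and the pose values below are illustrative
# assumptions, not a confirmed Zivid specification.


def write_pose_yaml(path, pose):
    """pose: 4x4 nested list; translation (last column) in millimeters."""
    flat = [v for row in pose for v in row]
    data = ", ".join(f"{v:.6f}" for v in flat)
    content = (
        "%YAML:1.0\n"
        "---\n"
        "PoseState: !!opencv-matrix\n"
        "   rows: 4\n"
        "   cols: 4\n"
        "   dt: d\n"
        f"   data: [ {data} ]\n"
    )
    with open(path, "w") as f:
        f.write(content)


# Identity rotation with a translation of (100, 200, 300) mm.
pose = [
    [1.0, 0.0, 0.0, 100.0],
    [0.0, 1.0, 0.0, 200.0],
    [0.0, 0.0, 1.0, 300.0],
    [0.0, 0.0, 0.0, 1.0],
]
write_pose_yaml("pos01.yaml", pose)
```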


Run the hand-eye calibration CLI tool

Launch the Command Prompt by pressing Win + R keys on the keyboard, then type cmd and press Enter.


Navigate to the folder where you installed Zivid software:

cd "C:\Program Files\Zivid\bin"

The inputs and outputs of the ZividExperimentalHandEyeCalibration.exe CLI tool can be displayed by typing the following command:

ZividExperimentalHandEyeCalibration.exe -h

To run the ZividExperimentalHandEyeCalibration.exe CLI tool you must specify:

  • The type of calibration (eye-in-hand or eye-to-hand)

  • The path to the directory containing the dataset (.zdf files and .yaml robot poses)

It is also handy to specify where to save the resulting hand-eye transform and residuals; see the example below:

SET dataset=C:\Users\Christian\Directory_where_dataset_is_located
ZividExperimentalHandEyeCalibration.exe --eth -d "%dataset%" --tf "%dataset%\tf.yaml" --rf "%dataset%\rf.yaml"

During execution, the algorithm reports whether it detects the checkerboard in each point cloud (“OK”) or not (“FAILED”). After the detection phase, it outputs the resulting hand-eye transform as a 4x4 homogeneous transformation matrix. Lastly, it outputs the Hand-Eye Calibration Residuals for every pose; see the example output below.


The resulting homogeneous transformation matrix (eyeInHandTransform.yaml) can then be used to transform the picking point coordinates or the entire point cloud from the camera frame to the robot base frame. To do this, check out How To Use The Result Of Hand-Eye Calibration.
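As a minimal sketch of that last step, the script below reads a 4x4 transform from an OpenCV-style YAML file and applies it to a single camera-frame point. The parsing assumes the matrix is stored as an !!opencv-matrix "data" list; key names and file layout vary, so adapt it to your actual transform file.

```python
# Sketch: load a 4x4 hand-eye transform from an OpenCV-style YAML file and
# transform a camera-frame point (in mm) into the robot base frame.
# ASSUMPTION: the matrix is stored as an !!opencv-matrix "data" list.
import re


def load_transform(path):
    """Extract the 4x4 matrix from the first 'data: [...]' entry."""
    text = open(path).read()
    nums = re.search(r"data:\s*\[([^\]]+)\]", text).group(1)
    vals = [float(v) for v in nums.replace("\n", " ").split(",")]
    return [vals[i * 4:(i + 1) * 4] for i in range(4)]


def transform_point(T, p):
    """Apply homogeneous transform T (4x4 nested list) to 3D point p (mm)."""
    x, y, z = p
    return tuple(
        T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3)
    )


# Example usage (path is a placeholder):
# T = load_transform(r"C:\path\to\tf.yaml")
# point_in_base = transform_point(T, (10.0, 20.0, 500.0))
```

The same 4x4 matrix can be applied to every point in the point cloud to bring the whole cloud into the robot base frame.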