Hand-Eye Calibration Problem

This tutorial aims to describe the problem that the hand-eye calibration solves as well as to introduce robot poses and coordinate systems that are required for the hand-eye calibration. If you are not familiar with (robot) poses and coordinate systems, check out Position, Orientation and Coordinate Transformations.

How can a robot pick an object?

Let’s start with a robot that doesn’t involve a camera. Its two main coordinate systems are:

  1. the robot base coordinate system

  2. the end-effector coordinate system

../../../_images/hand-eye-robot-ee-robot-base-coordinate-systems.png

To be able to pick an object, the robot controller needs to know the object’s pose (position and orientation) relative to the robot base frame. It also requires knowledge of the robot’s geometry.


This combined information is sufficient to compute the joint angles that will move the end-effector/gripper towards the object.

../../../_images/hand-eye-robot-robot-to-object.png

Now, let’s assume that the pose of the object relative to the robot is unknown. That’s where Zivid 3D vision comes into play.

../../../_images/hand-eye-robot-robot-to-object-with-camera.png

The camera can either be mounted stationary in the robot cell (eye-to-hand) or on the robot arm itself (eye-in-hand).

../../../_images/hand-eye-robot-robot-to-object-with-camera-on-arm.png

Zivid point clouds are given relative to the Zivid camera’s coordinate system. The origin of this coordinate system is fixed at the middle of the Zivid imager lens (internal 2D camera). Machine vision software can run detection and localization algorithms on these data points to determine the pose of the object in the Zivid camera’s coordinate system (\(H^{CAM}_{OBJ}\)).
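Such a pose can be represented as a 4×4 homogeneous transformation matrix, combining a rotation and a translation. The following is an illustrative sketch using numpy (this helper and its numbers are hypothetical, not part of the Zivid API):

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transformation from a 3x3 rotation
    matrix and a 3-element translation vector (illustrative helper)."""
    H = np.eye(4)
    H[:3, :3] = rotation
    H[:3, 3] = translation
    return H

# Hypothetical detection result: object 0.5 m straight ahead of the
# camera along its optical (z) axis, same orientation as the camera frame.
H_cam_obj = pose_matrix(np.eye(3), [0.0, 0.0, 0.5])
```

The upper-left 3×3 block holds the orientation, the last column the position; the bottom row is always [0, 0, 0, 1].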

../../../_images/hand-eye-full-circle-system.png

The Zivid camera can now see the object in its field of view, but only relative to its own coordinate system.


To enable the robot to pick the object, the object’s coordinates must be transformed from the camera coordinate system to the robot base coordinate system.

../../../_images/hand-eye-robot-robot-to-object-and-camera-to-object.png

The coordinate transformation that enables this is:

  • \(H^{ROB}_{CAM}\) - pose of the camera relative to the robot base

This transformation is constant because the camera is stationary relative to the robot base, and it is the result of hand-eye (eye-to-hand) calibration.


Once the pose circle is closed, any pose in the circle can be calculated from the others. In this case, the pose of the object relative to the robot base is found by post-multiplying the pose of the camera relative to the robot base with the pose of the object relative to the camera:

\[H^{ROB}_{OBJ}=H^{ROB}_{CAM} \cdot H^{CAM}_{OBJ}\]
../../../_images/hand-eye-eye-to-hand-all-poses.png
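The eye-to-hand composition above can be sketched with numpy as plain 4×4 matrix multiplication. All numbers here are hypothetical; in practice \(H^{ROB}_{CAM}\) comes from hand-eye calibration and \(H^{CAM}_{OBJ}\) from object detection:

```python
import numpy as np

# Hypothetical stationary camera: mounted 1 m above the robot base,
# looking straight down (camera z along robot -z, camera x along robot x).
R_rob_cam = np.array([[1.0,  0.0,  0.0],
                      [0.0, -1.0,  0.0],
                      [0.0,  0.0, -1.0]])
H_rob_cam = np.eye(4)
H_rob_cam[:3, :3] = R_rob_cam
H_rob_cam[:3, 3] = [0.0, 0.0, 1.0]

# Hypothetical detection: object 0.8 m in front of the camera
# along its optical (z) axis.
H_cam_obj = np.eye(4)
H_cam_obj[:3, 3] = [0.0, 0.0, 0.8]

# H_rob_obj = H_rob_cam . H_cam_obj
H_rob_obj = H_rob_cam @ H_cam_obj
print(H_rob_obj[:3, 3])  # object ends up 0.2 m above the robot base
```

Note that the order matters: post-multiplication chains the transformations from the robot base frame out to the object frame.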

With the camera mounted on the robot arm, the Zivid camera can see the object in its field of view, but only relative to its own coordinate system.


To enable the robot to pick the object, the object’s coordinates must be transformed from the camera coordinate system to the robot base coordinate system.

../../../_images/hand-eye-robot-robot-to-object-and-camera-to-object-on-arm.png

The coordinate transformations that enable this are:

  • \(H^{EE}_{CAM}\) - pose of the camera relative to the end-effector

  • \(H^{ROB}_{EE}\) - pose of the end-effector relative to the robot base

The former is constant and is the result of hand-eye (eye-in-hand) calibration, while the latter is known and provided by the robot controller.


Once the pose circle is closed, it is possible to calculate one pose from the other poses in the circle. In this case, the pose of the object relative to the robot:

\[H^{ROB}_{OBJ}=H^{ROB}_{EE} \cdot H^{EE}_{CAM} \cdot H^{CAM}_{OBJ}\]
../../../_images/hand-eye-eye-in-hand-all-poses.png
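The eye-in-hand chain works the same way, just with one more matrix in the product. Again, the numbers below are hypothetical placeholders for poses that would come from the robot controller, the hand-eye calibration, and object detection:

```python
import numpy as np

# Hypothetical end-effector pose reported by the robot controller:
# 0.4 m in x and 0.6 m in z from the base, same orientation as the base.
H_rob_ee = np.eye(4)
H_rob_ee[:3, 3] = [0.4, 0.0, 0.6]

# Hypothetical hand-eye calibration result: camera offset 5 cm
# from the end-effector along its z-axis.
H_ee_cam = np.eye(4)
H_ee_cam[:3, 3] = [0.0, 0.0, 0.05]

# Hypothetical detection: object 0.3 m from the camera along its z-axis.
H_cam_obj = np.eye(4)
H_cam_obj[:3, 3] = [0.0, 0.0, 0.3]

# H_rob_obj = H_rob_ee . H_ee_cam . H_cam_obj
H_rob_obj = H_rob_ee @ H_ee_cam @ H_cam_obj
# With identity rotations the translations simply add:
# x = 0.4, z = 0.6 + 0.05 + 0.3 = 0.95
print(H_rob_obj[:3, 3])
```

Because \(H^{ROB}_{EE}\) changes with every robot motion, this product must be re-evaluated for each capture, whereas \(H^{EE}_{CAM}\) is computed once by calibration.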

Now that we’ve defined the hand-eye calibration problem, let’s see the Hand-Eye Calibration Solution.