UR5 Robot + Python: Generate Dataset and Perform Hand-Eye Calibration

Note

This tutorial is built and tested using Universal Robots UR5e.

Introduction

Precise hand-eye calibration requires a good dataset, but generating such a dataset is not trivial. In this tutorial, we first set up communication between a Zivid camera and a Universal Robots (UR) UR5e robot to extract a dataset consisting of 3D images and robot poses. Then, we use this dataset to perform hand-eye calibration. The sample works for both eye-in-hand and eye-to-hand.

If you are not familiar with hand-eye calibration, we recommend going through Hand-Eye Calibration.

Requirements

Prepare UR5e robot

To generate a dataset, we want to follow the steps explained in Hand-Eye Calibration Process.

Clone zivid-python-samples from GitHub, navigate to the ur_hand_eye_calibration folder, and install the runtime requirements:

pip install -r requirements.txt

This installs all third-party libraries, including the UR RTDE (Real-Time Data Exchange) communication protocol.

The three files we need for this sample are:

  • universal_robots_communication_file.xml, configuration file with I/O registers.

  • universal_robots_robot_motion_script.urp, robot motion script created and run in UR controller.

  • universal_robots_perform_hand_eye_calibration.py, Python script for capturing images and poses, saving them, and performing hand-eye calibration.

The robot motion script is where we set all the robot postures used in the calibration. The Python script captures images, reads robot poses, and performs hand-eye calibration. Communication between the robot and the computer is done through the UR RTDE (Real-Time Data Exchange) protocol. An overview of the communication flow is illustrated below.

[Figure: communication flow between the UR controller and the Python script]
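Under the hood, the Python sample uses UR's rtde library to set up this exchange. A minimal connection sketch, assuming the default RTDE port 30004 and the placeholder robot IP used later in this tutorial:

import rtde.rtde as rtde
import rtde.rtde_config as rtde_config

# Parse the input/output recipes from the configuration file
config = rtde_config.ConfigFile("universal_robots_communication_file.xml")
output_names, output_types = config.get_recipe("out")
input_names, input_types = config.get_recipe("in")

# Connect to the robot controller (30004 is the default RTDE port)
con = rtde.RTDE("192.168.0.123", 30004)
con.connect()

# Register which fields to stream from and write to the controller
con.send_output_setup(output_names, output_types)
robot_input = con.send_input_setup(input_names, input_types)

con.send_start()  # start the real-time data exchange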

Small adjustments must be made to the UR robot motion script before we can generate a dataset.

Modify universal_robots_robot_motion_script.urp robot motion script

We only need to modify the UR robot motion script. First, save universal_robots_robot_motion_script.urp on a USB drive and load the script on your UR controller. Your controller interface should look something like this:

[Screenshot: UR controller interface with the robot motion script loaded]

Here, capture_hand_eye is a subprogram that triggers the camera to capture one image for every robot pose. The subprogram communicates with the PC via registers, which are logical channels on the communication interface. The following I/O variables are used to communicate with the Python script:

  • image_count

  • start_capture

  • camera_ready

  • finish_capture

output_int_register_24 = image_count:

Counts the number of images and poses acquired.

output_bit_register_64 = start_capture:

Bool that triggers the camera to capture an image.

input_bit_register_64 = finish_capture:

Bool that triggers the UR robot to move to the next position.

input_bit_register_65 = camera_ready:

Bool that tells the UR robot that the camera is ready to be used.
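On the PC side, the handshake could conceptually look like the sketch below. The register names follow the list above; the loop body is illustrative, not the sample's exact code, and it assumes the `con` and `robot_input` objects from the connection sketch earlier.

# Initialize the input registers before the first send
robot_input.input_bit_register_64 = False  # finish_capture
robot_input.input_bit_register_65 = True   # camera_ready: camera connected and configured
con.send(robot_input)

while True:
    state = con.receive()  # blocks until the next output package from the robot
    if state is None:
        break  # program stopped or connection lost
    if state.output_bit_register_64:  # start_capture raised by the robot
        image_count = state.output_int_register_24
        robot_pose = state.actual_TCP_pose
        # ... capture a 3D image and save it together with robot_pose ...
        robot_input.input_bit_register_64 = True  # finish_capture: move to next pose
        con.send(robot_input)
        # (a real loop would also clear finish_capture before the next pose)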

The universal_robots_communication_file.xml states which registers we write to. If we change a register in the Python script, we must also change the same register in the configuration file.

<?xml version="1.0"?>
<rtde_config>
    <recipe key="out">
        <field name="actual_TCP_pose" type="VECTOR6D"/>
        <field name="output_int_register_24" type="INT32"/>
        <field name="output_bit_register_64" type="BOOL"/>
    </recipe>
    <recipe key="in">
        <field name="input_bit_register_64" type="BOOL"/>
        <field name="input_bit_register_65" type="BOOL"/>
    </recipe>
</rtde_config>
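Note that actual_TCP_pose arrives as a 6-vector: x, y, z in meters followed by an axis-angle rotation vector. Hand-eye calibration expects each robot pose as a 4x4 homogeneous matrix, and Zivid point clouds are in millimeters, so the pose has to be converted. A sketch of that conversion, assuming SciPy is available (the helper name is ours):

import numpy as np
from scipy.spatial.transform import Rotation


def pose_vector_to_matrix(tcp_pose):
    """Convert a UR pose vector (x, y, z [m], rx, ry, rz [axis-angle]) to a 4x4 transform in mm."""
    transform = np.eye(4)
    transform[:3, :3] = Rotation.from_rotvec(tcp_pose[3:]).as_matrix()
    transform[:3, 3] = np.asarray(tcp_pose[:3]) * 1000.0  # meters -> millimeters
    return transform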

Every waypoint, robot_pose_X, must be modified to fit your setup. The easiest way is to put the robot in free-drive mode and guide it by hand to the desired postures. Make sure that the Zivid checkerboard is visible to the camera at each of these postures. If the camera cannot properly expose the checkerboard, modify the camera settings in the function _set_camera_settings.

import datetime

import zivid


def _set_camera_settings(cam: zivid.Camera):
    """Set camera settings.

    Args:
        cam: Zivid camera
    """
    with cam.update_settings() as updater:
        updater.settings.iris = 20  # aperture; lower value lets in less light
        updater.settings.exposure_time = datetime.timedelta(microseconds=10000)
        updater.settings.filters.reflection.enabled = True
        updater.settings.brightness = 1.0  # projector brightness
        updater.settings.filters.gaussian.enabled = True
        updater.settings.gain = 1

How To Get Good Quality Data On Zivid Calibration Board explains how to tune the parameters for optimal 3D image quality.

Caution

If the captured images of the Zivid checkerboard are badly exposed, hand-eye calibration will not work!
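A quick way to verify the exposure at each posture is to check that the checkerboard feature points are actually detected. A sketch, assuming a recent zivid-python (2.x) API:

import zivid

app = zivid.Application()
camera = app.connect_camera()
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])  # replace with your tuned settings
frame = camera.capture(settings)
detection_result = zivid.calibration.detect_feature_points(frame.point_cloud())
if not detection_result.valid():
    print("Checkerboard not detected - adjust the camera settings")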

You can add or remove waypoints; just remember to call capture_hand_eye after every waypoint. We recommend 10-20 waypoints to get an accurate hand-eye transformation.

Before we run the Python script, it is wise to manually step through all the robot positions. This way we can verify that the robot does not end up in a singularity while moving from one position to the next.

Caution

If you do eye-in-hand calibration, ensure that the Zivid camera does not collide with the robot.

Run Python script

Now that the robot script is modified to fit your scene, we are ready to generate a dataset and perform calibration!

Tip

It is recommended to Warm-up the camera and run Infield Correction before running hand-eye calibration. Use the same capture cycle during warm-up, infield correction, and hand-eye calibration as in your application.

First, run universal_robots_robot_motion_script.urp from the UR controller. When the program starts, it waits for the Python script to set the camera_ready variable to True. Then, run the universal_robots_perform_hand_eye_calibration.py script from the command line.

For Eye-in-hand:

python universal_robots_perform_hand_eye_calibration.py --eih --ip 192.168.0.123

For Eye-to-hand:

python universal_robots_perform_hand_eye_calibration.py --eth --ip 192.168.0.123

Replace 192.168.0.123 with the robot’s actual IP address.

The required inputs are displayed by typing the following command in the command prompt:

python universal_robots_perform_hand_eye_calibration.py -h
[Screenshot: output of the hand-eye calibration sample]
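For reference, the flags above could be wired with argparse roughly like this (a sketch, not the sample's exact code):

import argparse

parser = argparse.ArgumentParser(description="Generate dataset and perform hand-eye calibration")
calibration_type = parser.add_mutually_exclusive_group(required=True)
calibration_type.add_argument("--eih", action="store_true", help="eye-in-hand calibration")
calibration_type.add_argument("--eth", action="store_true", help="eye-to-hand calibration")
parser.add_argument("--ip", required=True, help="robot IP address")
args = parser.parse_args()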

When the script is finished, it outputs the following:

  • the directory we saved our data in

  • the 4x4 calibration matrix

  • the residual in rotation and translation for each of the robot poses

A more detailed explanation of how Zivid calculates residuals is found in Hand-Eye Calibration Residuals.
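For context, the calibration step itself could look roughly like the following sketch, assuming the zivid-python 2.x calibration API and a dataset of (4x4 pose matrix, zivid.Frame) pairs collected as described above:

import zivid


def calibrate(dataset, eye_in_hand=True):
    """Perform hand-eye calibration from (4x4 pose matrix, zivid.Frame) pairs."""
    hand_eye_inputs = []
    for transform, frame in dataset:
        detection_result = zivid.calibration.detect_feature_points(frame.point_cloud())
        pose = zivid.calibration.Pose(transform)
        hand_eye_inputs.append(zivid.calibration.HandEyeInput(pose, detection_result))
    if eye_in_hand:
        return zivid.calibration.calibrate_eye_in_hand(hand_eye_inputs)
    return zivid.calibration.calibrate_eye_to_hand(hand_eye_inputs)


result = calibrate(dataset)
print(result.transform())  # the 4x4 hand-eye transform
for residual in result.residuals():  # one residual per robot pose
    print(residual.rotation(), residual.translation())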

Once we have the transformation matrix, we can use it to convert a 3D point from the camera frame to the robot base frame. Details about how to do this are described in How To Use The Result Of Hand-Eye Calibration.
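As a minimal sketch of what that looks like in practice (eye-to-hand case, where the result maps the camera frame directly to the robot base frame; the point values are made up):

import numpy as np

hand_eye_transform = np.eye(4)  # replace with the calibrated 4x4 transform
point_in_camera = np.array([100.0, 50.0, 600.0, 1.0])  # homogeneous 3D point in mm (example values)
point_in_base = hand_eye_transform @ point_in_camera
print(point_in_base[:3])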

Hint

Python and C++ samples of transforming a 3D point / point cloud are available at our GitHub repo.