Cautions and Recommendations

Common mistakes

Image quality

  • Ensure the calibration object is well exposed and in focus in all images.

  • Keep the object within the depth of field:

    • Use higher \(f\)-number (smaller aperture).

    • Compensate with longer exposure times.

  • Use the same aperture setting for all images to avoid fluctuations caused by varying aperture sizes.

  • Capture the calibration object from diverse views by exciting all six degrees of freedom of both the calibration object and the robot.

Zivid API

  • Zivid point clouds are given in millimeters (mm).

  • Therefore, ensure that robot input poses for Hand-Eye calibration also use mm for translation.

  • As a result, the Hand-Eye output translation values will be in millimeters (mm).
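If your robot controller reports poses with translation in meters, the translation must be scaled before the pose is passed to hand-eye calibration. A minimal sketch, assuming a 4×4 row-major pose matrix stored as nested lists (the function name is illustrative, not part of the Zivid API):

```python
def pose_meters_to_millimeters(pose):
    """Return a copy of a 4x4 pose matrix with its translation scaled from m to mm.

    The rotation part (upper-left 3x3) is unitless and is left untouched;
    only the translation column (rows 0-2 of the last column) is scaled.
    """
    pose_mm = [row[:] for row in pose]  # copy each row so the input is not modified
    for i in range(3):
        pose_mm[i][3] *= 1000.0
    return pose_mm

# Example: identity rotation with a translation of (0.5 m, -0.25 m, 1.0 m)
pose_m = [
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 1.0, 0.0, -0.25],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
]
pose_mm = pose_meters_to_millimeters(pose_m)  # translation becomes (500.0, -250.0, 1000.0)
```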

Detectable ArUco marker

  • If using a Zivid calibration board, ensure the ArUco marker is visible in all poses; otherwise, checkerboard detection will fail.

  • If using one or more ArUco markers as the calibration object(s), not all markers need to be visible in every pose.
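When only a subset of markers is visible in a given pose, it can still be useful to log which of the expected markers were missed. A small hypothetical helper (not part of the Zivid API), assuming you have already extracted the detected marker IDs from the detection result:

```python
def missing_marker_ids(expected_ids, detected_ids):
    """Return the sorted list of expected ArUco marker IDs not detected in this pose."""
    return sorted(set(expected_ids) - set(detected_ids))

expected = [1, 2, 3, 4]
detected = [2, 4]  # e.g., markers 1 and 3 were occluded in this pose
missing = missing_marker_ids(expected, detected)  # → [1, 3]
```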

The detection API provides information about why the detection failed. The following shows an example of how to check the detection status:

C++
const auto detectionResult = Zivid::Calibration::detectCalibrationBoard(frame);

if(detectionResult.valid())
{
    std::cout << "Calibration board detected " << std::endl;
    handEyeInput.emplace_back(robotPose, detectionResult);
    currentPoseId++;
}
else
{
    std::cout << "Failed to detect calibration board. " << detectionResult.statusDescription() << std::endl;
}
C#
using (var frame = camera.Capture2D3D(settings))
{
    var detectionResult = Detector.DetectCalibrationBoard(frame);

    if (detectionResult.Valid())
    {
        Console.WriteLine("Calibration board detected");
        handEyeInput.Add(new HandEyeInput(robotPose, detectionResult));
        ++currentPoseId;
    }
    else
    {
        Console.WriteLine("Failed to detect calibration board, ensure that the entire board is in the view of the camera");
    }
}
Python
detection_result = zivid.calibration.detect_calibration_board(frame)

if detection_result.valid():
    print("Calibration board detected")
    hand_eye_input.append(zivid.calibration.HandEyeInput(robot_pose, detection_result))
    current_pose_id += 1
else:
    print(f"Failed to detect calibration board. {detection_result.status_description()}")

Eye-to-hand

  • The calibration object is mounted on the robot end-effector and moves with the robot.

  • Mounting can be fixed directly to the flange or held by a gripper.

  • Exact mounting location is not critical (it is not necessary to know its pose relative to the end-effector).

Recommendations:

  • Ensure the calibration object does not move relative to the flange or gripper during robot motion.

  • Use rigid materials for brackets and calibration object (e.g., use Zivid On-Arm Mount to mount the Zivid calibration board to the robot).

  • Ensure smooth robot motion with controlled accelerations to avoid shaking or dislocation.

  • Ensure the calibration object is still during image acquisition (e.g., wait a few seconds after robot motion to allow vibrations to settle before capturing).
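The last two recommendations can be combined into a small capture wrapper. This is an illustrative sketch, not Zivid API code; `capture_fn` stands for whatever function triggers your capture, and the two-second default settle time is an assumption to be tuned for your setup:

```python
import time

def capture_after_settling(capture_fn, settle_time_s=2.0):
    """Pause after robot motion so vibrations can settle, then trigger the capture."""
    time.sleep(settle_time_s)
    return capture_fn()
```

In a calibration loop, call this right after the robot reports that it has reached the target pose.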

Eye-in-hand

  • The calibration object is stationary in the robot’s workspace.

  • The object’s exact location is not critical (it is not necessary to know its pose relative to the robot’s base).

  • The camera (mounted on the end-effector) must be able to see the calibration object from multiple perspectives.

Recommendations:

  • Ensure the calibration object remains fixed in place and does not move at any point during the calibration routine.

  • Ensure smooth robot motion and allow vibrations to settle before each capture.

Environment conditions

Keep the camera temperature stable during calibration:

  • Ensure Thermal Stabilization is enabled to minimize temperature-related performance variations.

  • Perform a Warm-up procedure before starting.

  • Keep calibration conditions similar to the expected operating conditions.
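The warm-up step can be sketched as a timed capture loop. The duration and interval below are assumptions (consult the Warm-up article for recommended values), and `capture_fn` is a placeholder for your capture call:

```python
import time

def warm_up(capture_fn, duration_s=600.0, interval_s=5.0):
    """Capture repeatedly for a fixed duration so the camera reaches a stable
    operating temperature before calibration starts. Returns the capture count."""
    end_time = time.monotonic() + duration_s
    captures = 0
    while time.monotonic() < end_time:
        capture_fn()
        captures += 1
        time.sleep(interval_s)
    return captures
```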

Accuracy and recalibration

Picking accuracy depends on:

  • Camera calibration

  • Hand-eye calibration

  • Machine vision software

  • Robot positioning

Robots are repeatable but not inherently accurate, due to factors such as temperature, joint friction, payload, and manufacturing tolerances.

Recommendations:

  • Perform mastering (zeroing) to teach the robot the exact zero position of each joint.

  • Calibrate the robot for improved pose accuracy.

  • Repeat hand-eye calibration if:

    • the robot loses calibration.

    • the camera is dismounted and remounted.

  • Perform regular recalibration (robot and/or hand-eye) to maintain system performance.

Choosing the correct method

  • Avoid methods that compute or require Camera Intrinsics during hand-eye calibration:

    • Zivid cameras are already calibrated with a complex proprietary model.

    • The Zivid SDK provides approximations of the OpenCV and Halcon models, but these approximations lose accuracy relative to the proprietary model.

Recommendations:

  • Use Zivid Hand-Eye Calibration, which leverages 3D point cloud data for extrinsic calculation.

  • Alternative methods based on CAD matching are also acceptable (they likewise calculate camera extrinsics from the point cloud); however, they may lack Zivid-specific optimizations.

  • An additional advantage of the Zivid method is its support for ArUco markers, which can be permanently installed in the robot cell (for on-arm cameras) or on the robot (for stationary cameras).

Detailed explanation

Some third-party hand-eye calibration methods compute the camera intrinsic parameters along with the extrinsic parameters and the relative pose between the camera and the robot frame. We do not recommend these approaches because they treat a Zivid camera as an uncalibrated 2D camera rather than as a well-calibrated 3D camera.

Each Zivid camera unit goes through an extensive calibration process, which includes determining the intrinsic parameters of its 2D color camera. Our calibration uses a complex camera model with more intrinsic parameters than well-known pinhole camera models such as the OpenCV camera model. Since the Zivid camera model is proprietary, our internal camera intrinsics are not available in the SDK. However, the Zivid SDK does offer approximations of the OpenCV and Halcon models derived from our camera model (see Camera Intrinsics). Because information is lost in the approximation, hand-eye calibration methods that use OpenCV or Halcon intrinsics are not the best approach either.

Since Zivid cameras provide 3D data, camera extrinsics can be calculated directly from the point cloud. The Zivid Hand-Eye Calibration method exploits this, which is why it is the recommended approach and the one that works best with our cameras. There are alternative, non-Zivid methods that also calculate camera extrinsics from the point cloud; one example is hand-eye calibration based on CAD matching.

Another reason to use Zivid Hand-Eye Calibration is its support for ArUco markers, which are relatively small and can therefore be permanently installed in the robot cell. They are especially handy for stationary cameras (eye-to-hand), as they can be mounted between the robot flange and the gripper. For robot-mounted cameras (eye-in-hand), the markers can instead be permanently installed somewhere in the robot's workspace.

Read more about Hand-Eye Calibration Residuals.

Version History

SDK      Changes

2.15.0   Information about why detection failed was added to the detection API.