How To Get Good Quality Data On Zivid Calibration Object
This tutorial presents how to acquire good quality point clouds of the calibration object for hand-eye calibration. This step is crucial both for the hand-eye calibration algorithm to work and for achieving the desired accuracy. The goal is to configure camera settings that provide high-quality point clouds regardless of where the calibration object is located in the FOV. While the tutorial provides point cloud examples for the Zivid calibration board, the same principles apply to ArUco markers.
It is assumed that you have already specified the robot poses at which you want to take point clouds of the calibration object. In the next article, you can learn how to select appropriate poses for hand-eye calibration.
Note
To calibrate using the Zivid calibration board, ensure that the entire board, including the ArUco marker, is fully visible for each pose.
Note
To calibrate using ArUco markers, ensure that at least one of the markers is fully visible for each pose. Better results can be expected if more markers are visible, but this is not necessary.
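If you want to verify visibility programmatically rather than by eye, the detection functions in the Zivid SDK can be used as a quick check. The following is a minimal sketch, assuming the zivid Python package and a connected camera; the default acquisition settings are placeholders, not a recommendation:

```python
import zivid

app = zivid.Application()
camera = app.connect_camera()

# Capture with placeholder settings just to test whether the board is visible.
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])
frame = camera.capture(settings)

# Detection only succeeds if the calibration board feature points are found.
detection_result = zivid.calibration.detect_feature_points(frame.point_cloud())
if detection_result.valid():
    print("Calibration board detected for this pose")
else:
    print("Calibration board NOT detected - adjust the pose or the settings")
```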
We will discuss two poses: the ‘near’ pose and the ‘far’ pose. The ‘near’ pose refers to the robot position where the imaging distance between the camera and the calibration object is minimized. In eye-in-hand systems, this is when the robot-mounted camera is closest to the calibration object. In eye-to-hand systems, it is when the robot positions the calibration object closest to the stationary camera.
Eye-in-hand robot pose for a close capture of Zivid calibration board |
Eye-to-hand robot pose for a close capture of Zivid calibration board |
The ‘far’ pose refers to the robot position with the greatest imaging distance between the camera and the calibration object. In eye-in-hand systems, this is when the robot-mounted camera is farthest away from the calibration object. In eye-to-hand systems, it is when the robot positions the calibration object farthest away from the stationary camera.
Eye-in-hand robot pose for the farthest capture of Zivid calibration board |
Eye-to-hand robot pose for the farthest capture of Zivid calibration board |
Tip
If you are using the Zivid calibration board, try the Calibration Board presets
for the nearest and farthest poses.
If you get good point cloud quality, you can skip the rest of the tutorial and use these settings.
Note that these preset settings will NOT work well for ArUco markers.
The expected result is as follows.
Tip
If you are using ArUco markers, try the ArUco Marker presets
for the nearest and farthest poses.
If you get good point cloud quality, you can skip the rest of the tutorial and use these settings.
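If you work from the API rather than Zivid Studio, preset (or manually tuned) settings can be exported from Zivid Studio as a .yml file and loaded programmatically. A minimal sketch, assuming the zivid Python package and a hypothetical file name:

```python
import zivid

app = zivid.Application()
camera = app.connect_camera()

# Hypothetical settings file exported from Zivid Studio.
settings = zivid.Settings.load("calibration_board_near.yml")

frame = camera.capture(settings)
frame.save("near_pose.zdf")  # inspect this capture before committing to the preset
```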
The step-by-step process for acquiring good point clouds for hand-eye calibration is as follows.
Possible scenarios
The following tutorial uses the SNR map to check the signal quality of the black and white pixels. The color indicator changes from red to dark blue as the SNR value increases. Below you can find the SNR scale and all the possible cases you may encounter in this tutorial; a short API sketch for reading the SNR values follows the list of scenarios.
The black areas are underexposed, and the quality of the white areas is not good. |
The quality of the black areas is not good and the white areas have a satisfactory quality. |
The quality of the black areas is satisfactory and the white areas have an optimal quality. |
The quality of the black areas is optimal and the white areas are overexposed. |
The black and white areas are overexposed. |
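For reference, the SNR values shown in the SNR map can also be read out through the API. A minimal sketch, assuming a captured frame (for example from the sketch earlier in this tutorial) and NumPy:

```python
import numpy as np

# Per-pixel SNR values, the same data that Zivid Studio visualizes as the SNR map.
snr = frame.point_cloud().copy_data("snr")

# A coarse quality indication; in practice, crop to the calibration object region.
print(f"SNR min / median / max: {np.nanmin(snr):.1f} / {np.nanmedian(snr):.1f} / {np.nanmax(snr):.1f}")
```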
Base settings
First, we will define the base settings for this tutorial. A code sketch of these settings with the Zivid Python API follows the list below.
Move the robot to the ‘near’ pose.
Run Zivid Studio and connect to the camera.
Set Vision Engine to the one that you are using in your application; if using the Presets, check the Vision Engine your Presets use.
Set Sampling, Color to rgb.
Set Sampling, Pixel to the one that you are using for your application; if using the Presets, check the Sampling your Presets use.
Set Exposure Time to 10000μs for a 50Hz grid frequency, or to 8333μs for a 60Hz grid frequency.
Set the f-number using the depth of focus calculator:
Minimum Depth-of-Focus (mm): farthest working distance - closest working distance
Closest working distance (mm): the closest distance between the camera and the calibration object
Farthest working distance (mm): the farthest distance between the camera and the calibration object
Acceptable blur radius (pixels): 1
Set Projector Brightness to the maximum.
Set Gain to 1.
Set Noise Filter to 5.
Set Outlier Filter to 10.
Set Reflection Filter to global.
Turn off all other filters and leave all other settings at their default values.
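The base settings above can also be expressed with the Zivid Python API. The following is a minimal sketch, not a drop-in configuration: the engine, sampling values, f-number, and projector brightness are example values that depend on your application and camera model, and some field names (such as the reflection filter mode) may differ between SDK versions.

```python
import datetime
import zivid

app = zivid.Application()
camera = app.connect_camera()

settings = zivid.Settings()
settings.engine = "phase"        # use the vision engine your application uses
settings.sampling.color = "rgb"
settings.sampling.pixel = "all"  # match the sampling your application or presets use

settings.acquisitions.append(
    zivid.Settings.Acquisition(
        exposure_time=datetime.timedelta(microseconds=10000),  # 10000 us (50 Hz grid) / 8333 us (60 Hz grid)
        aperture=5.66,     # example f-number; take yours from the depth of focus calculator
        brightness=1.8,    # example value; use the maximum for your camera model
        gain=1.0,
    )
)

filters = settings.processing.filters
filters.noise.removal.enabled = True
filters.noise.removal.threshold = 5.0
filters.outlier.removal.enabled = True
filters.outlier.removal.threshold = 10.0
filters.reflection.removal.enabled = True
filters.reflection.removal.mode = "global"  # mode selection may not be available on older SDK versions
filters.smoothing.gaussian.enabled = False  # all other filters stay off for now
```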
Optimizing camera settings for the ‘near’ pose
Fine-tuning for ‘near’ White (Acquisition 1)
At this stage, ignore the black surfaces and focus on getting good data on the white surfaces. Capture and analyze the white regions of your calibration object. Use the following images as a reference when fine-tuning for the white regions.
The quality of the white areas is optimal, the black areas can be ignored (SNR map). |
The quality of the white areas is optimal, the black areas can be ignored (Depth map). |
The image is underexposed (the white pixels are too dark).
Increase Exposure Time.
Increase the exposure time by increments of 10000μs [50Hz] or 8333μs [60Hz] until there is good data on the white regions.
Decrease the f-number.
The image is overexposed (the white pixels are saturated).
Decrease Exposure Time.
Reducing the exposure time can lead to the appearance of waves on the point cloud due to interference from ambient light (from the power grid). If there are no waves, keep reducing the exposure time until you have good data on the white regions.
If the exposure time limit has been reached and the data is still not good enough, or the point cloud shows waves, follow the next options:
Increase the f-number.
Decrease Projector Brightness.
At this point, one of the four acquisitions (“Acquisition 1”) is tuned.
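Continuing the API sketch from the base settings (the settings and datetime objects from that sketch are assumed to be in scope), the same fine-tuning can be done by nudging the first acquisition. The values below are examples; which parameter you adjust depends on which of the cases above applies.

```python
# "Acquisition 1" is the first acquisition in the settings from the base-settings sketch.
acquisition1 = settings.acquisitions[0]

# Underexposed white pixels: step the exposure time up one grid period at a time.
acquisition1.exposure_time += datetime.timedelta(microseconds=10000)  # use 8333 us on a 60 Hz grid

# Overexposed white pixels: shorten the exposure time, or, if waves appear,
# close the aperture or dim the projector instead (example values).
# acquisition1.aperture = 8.0
# acquisition1.brightness = 1.0
```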
Fine-tuning for ‘near’ Black (Acquisition 2)
Turn off “Acquisition 1” and clone it. This “Acquisition 2” needs to be tuned to have good data on the black part of your calibration object. Therefore, the white regions are expected to be overexposed; an API sketch of this acquisition follows the list of adjustments below. Look at the following image as an example, where we have no data on the white part of the checkerboard due to overexposed white pixels:
White pixels overexposed and good data on the black pixels (SNR map). |
White pixels overexposed and good data on the black pixels (Depth map). |
Dark surfaces require higher light exposure.
The image is underexposed (the black pixels are too dark).
Increase Projector Brightness.
Increase Exposure Time.
Increase the exposure time by increments of 10000μs [50Hz] or 8333μs [60Hz] until there is good data on the black regions of your calibration object. If the limit has been reached and the data is not yet good enough, follow the next option.
Decrease the f-number.
Increase Gain.
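In the API sketch, “Acquisition 2” becomes a second acquisition built from the first; the settings, acquisition1, and datetime names from the previous sketches are assumed to be in scope. The values are placeholders; increase the gain only after the exposure time and aperture options are exhausted.

```python
# Clone "Acquisition 1" and push the exposure up for the black squares.
acquisition2 = zivid.Settings.Acquisition(
    aperture=acquisition1.aperture,
    brightness=acquisition1.brightness,  # or the maximum for your camera, if not there already
    exposure_time=acquisition1.exposure_time + datetime.timedelta(microseconds=10000),
    gain=1.0,  # raise (for example to 2.0) only as a last resort
)
settings.acquisitions.append(acquisition2)
```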
Optimizing camera settings for the ‘far’ pose
Fine-tuning for ‘far’ White (Acquisition 3)
Turn off “Acquisition 2” and clone “Acquisition 1”. Again, let’s ignore the black surfaces and just focus on the white ones. Capture and analyze the white regions of the calibration object.
The image is underexposed (the white pixels are too dark).
Increase Projector Brightness.
Increase Exposure Time.
Increase the exposure time by increments of 10000μs [50Hz] or 8333μs [60Hz] until there is good data on the white regions of your calibration object. If the limit has been reached and the data is not yet good enough, follow the next option.
Decrease the f-number.
The image is overexposed (the white pixels are saturated).
Decrease Exposure Time.
Reducing the exposure time can lead to the appearance of waves on the point cloud due to interference from ambient light (from the power grid). If there are no waves, keep reducing the exposure time until you have good data on the white regions.
If the exposure time limit has been reached and the data is still not good enough, or the point cloud shows waves, follow the next options:
Increase the f-number.
Decrease Projector Brightness.
Fine-tuning for ‘far’ Black (Acquisition 4)
Turn off “Acquisition 3” and clone “Acquisition 2”.
This “Acquisition 4” needs to be tuned to have good data on the black part of the calibration object. Therefore, the white regions are expected to be overexposed.
Dark surfaces require higher light exposure.
The image is underexposed (the black pixels are too dark).
Increase Projector Brightness.
Increase Exposure Time.
Increase the exposure time by increments of 10000μs [50Hz] or 8333μs [60Hz] until there is good data on the black regions of your calibration object. If the limit has been reached and the data is not yet good enough, follow the next option.
Decrease the f-number.
Increase Gain.
At this stage, the settings for all four acquisitions are configured. The remaining step is to configure the final filters.
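In the API sketch this means the settings now hold four acquisitions, two tuned at the ‘near’ pose and two at the ‘far’ pose, and a single HDR capture combines them (assuming the settings and camera objects from the earlier sketches):

```python
# near-white, near-black, far-white, far-black
assert len(settings.acquisitions) == 4

# One capture blends all four acquisitions into a single HDR point cloud.
frame = camera.capture(settings)
frame.save("calibration_object.zdf")  # inspect in Zivid Studio before moving on
```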
Optimizing filters
Set Gaussian Smoothing to 5.
Set Contrast Distortion, Correction to 0.4.
Set Contrast Distortion, Removal to 0.5.
Set Gaussian Smoothing to 5.
Set Contrast Distortion, Correction to 0.4.
Keep Contrast Distortion, Removal off.
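The filter values above map to the processing settings in the API. A minimal sketch, assuming the settings object from the earlier sketches; note that in some SDK versions the contrast distortion filter is located under filters.experimental.contrast_distortion, while newer versions may expose it directly under filters, so adjust the path to your SDK:

```python
filters = settings.processing.filters

filters.smoothing.gaussian.enabled = True
filters.smoothing.gaussian.sigma = 5.0

# Contrast distortion filter (path may differ between SDK versions, see note above).
contrast_distortion = filters.experimental.contrast_distortion
contrast_distortion.correction.enabled = True
contrast_distortion.correction.strength = 0.4
contrast_distortion.removal.enabled = True   # keep this False if Removal should stay off
contrast_distortion.removal.threshold = 0.5
```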
Your point cloud should be similar to the point cloud shown at the beginning of the tutorial.
Note
In most cases, two acquisitions, one optimized for ‘near’ and the other for ‘far’ poses, will provide good quality data on the calibration object. Another option for determining the correct imaging settings is to utilize the Capture Assistant. However, it currently only offers optimal settings for the entire scene, rather than specifically for the checkerboard. While the Capture Assistant can still be effective for this purpose, it is recommended to use the method described above, as it consistently yields good results.
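For completeness, the Capture Assistant is also available through the API. A minimal sketch, assuming a 1.2 s capture-time budget and a 50 Hz power grid:

```python
import datetime
import zivid

app = zivid.Application()
camera = app.connect_camera()

suggest_parameters = zivid.capture_assistant.SuggestSettingsParameters(
    max_capture_time=datetime.timedelta(milliseconds=1200),
    ambient_light_frequency=zivid.capture_assistant.SuggestSettingsParameters.AmbientLightFrequency.hz50,
)
suggested_settings = zivid.capture_assistant.suggest_settings(camera, suggest_parameters)
frame = camera.capture(suggested_settings)
```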
Let’s now see how to realize the Hand-Eye Calibration Process.