Camera Intrinsics

The Zivid camera model is more complex than some well-known pinhole camera models (such as the OpenCV camera model), as it uses more intrinsic parameters. The camera also uses rolling calibration, meaning the point cloud is generated as a function of aperture, temperature, and color channel.

3D-to-2D Mapping in Zivid Cameras

Every camera has a physical relationship between 3D points in space and their corresponding 2D pixel indices. For Zivid cameras:

  • Same Resolution: When the 2D and 3D data have the same resolution, there is a direct 1:1 mapping between the two.

  • Different Resolutions: If the 2D and 3D resolutions differ, the mapping depends on the following settings:

    • Settings2D::Sampling::Pixel

    • Zivid::Settings::Sampling::Pixel

To maintain a 1:1 correspondence between 2D and 3D data, you can resample the 3D point cloud using the Settings::Processing::Resampling setting. For more information, see Resampling.

Alternatively, if resampling is not performed, you can use the pixelMapping(camera, settings) function to get the correct point-to-pixel mapping for any 3D sampling configuration and full-resolution 2D.
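To make that mapping concrete, here is a minimal sketch of how stride and offset values of the kind returned by pixelMapping(camera, settings) translate a 3D point cloud index into a full-resolution 2D pixel index. The stride and offset numbers below are illustrative assumptions, not values from the SDK.

```python
def map_3d_index_to_2d_pixel(row, col, row_stride, col_stride, row_offset, col_offset):
    """Return the full-resolution 2D pixel that a 3D point at (row, col) maps to."""
    return (row * row_stride + row_offset, col * col_stride + col_offset)


# With 2x2 subsampling, every 3D index covers a 2x2 block of 2D pixels;
# a stride of 2 and offset of 0 are assumed here for illustration.
print(map_3d_index_to_2d_pixel(10, 20, 2, 2, 0, 0))  # (20, 40)
```

With a 1:1 capture configuration, the strides are 1 and the offsets 0, so the mapping degenerates to the identity.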

Caution

We advise against using camera intrinsics for the following reasons:

  • Loss of Information: Approximations can result in a loss of valuable information.

  • Better Alternatives: More effective methods are often available to achieve similar or better results.

We recommend using the point cloud data directly and pixel mapping instead of intrinsics.

The Zivid camera model is proprietary, which is why our internal camera intrinsics are not available in the SDK. However, since many machine vision algorithms rely on standard models, Zivid provides approximations of the OpenCV and Halcon intrinsics models for compatibility.
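As a refresher on what such an OpenCV-style approximation provides, the sketch below projects a 3D point with the standard OpenCV pinhole model (camera matrix plus radial and tangential distortion). All numeric values are illustrative, not Zivid calibration data.

```python
def project_point(point, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0, k3=0.0):
    """Project a 3D point (camera frame, Z > 0) to a 2D pixel with the OpenCV model."""
    x_n = point[0] / point[2]  # normalized image coordinates
    y_n = point[1] / point[2]
    r2 = x_n * x_n + y_n * y_n
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x_n * radial + 2.0 * p1 * x_n * y_n + p2 * (r2 + 2.0 * x_n * x_n)
    y_d = y_n * radial + p1 * (r2 + 2.0 * y_n * y_n) + 2.0 * p2 * x_n * y_n
    return (fx * x_d + cx, fy * y_d + cy)


# A point on the optical axis lands exactly on the principal point.
print(project_point((0.0, 0.0, 1.0), fx=2765.0, fy=2764.0, cx=954.0, cy=606.0))  # (954.0, 606.0)
```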

OpenCV camera intrinsics

The Zivid SDK provides several options for getting OpenCV camera intrinsics.

The resolution that the returned intrinsics correspond to depends on the function:

  • intrinsics(camera): default value of Zivid::Settings::Sampling::Pixel for the camera

  • intrinsics(camera, settings): combination of Zivid::Settings::Sampling::Pixel and Zivid::Settings::Resampling in settings

  • intrinsics(camera, settings_2d): value of Zivid::Settings2D::Sampling::Pixel in settings_2d

  • estimateIntrinsics(frame): combination of Zivid::Settings::Sampling::Pixel and Zivid::Settings::Resampling used to capture the frame

We recommend estimating the intrinsics from the point cloud to get the most accurate results. The hard-coded camera intrinsics are given for a single aperture and a single temperature, so they will not be as accurate as the estimated intrinsics. See the values below for each camera model.

  • Zivid 2+: lens temperature 35 °C, aperture 2.68

  • Zivid 2: lens temperature 35 °C, aperture 2.30

The camera intrinsics are estimated from the point cloud, taking into account the temperature and aperture used during the capture.

To get the estimated OpenCV intrinsics, first connect to the camera:

C++

std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();

C#

Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();

Python

print("Connecting to camera")
camera = app.connect_camera()

Call the capture function to get a frame:

C++

const auto frame = camera.capture(settings);

C#

var frame = camera.Capture(settings);

Python

with camera.capture(settings=settings) as frame:

Then, you can estimate the intrinsics from the frame:

C++

const auto estimated_intrinsics = Zivid::Experimental::Calibration::estimateIntrinsics(frame);

C#

var estimatedIntrinsics = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frame);

Python

estimated_intrinsics = zivid.experimental.calibration.estimate_intrinsics(frame)

The estimateIntrinsics function takes sampling strategies into account. For example, if you set Zivid::Settings::Sampling::Pixel to by2x2 and Zivid::Settings::Resampling to disabled, you get correct (subsampled) intrinsics:

C++

const auto settingsSubsampled = subsampledSettingsForCamera(camera);
const auto frame = camera.capture(settingsSubsampled);
const auto estimatedIntrinsicsForSubsampledSettings =
    Zivid::Experimental::Calibration::estimateIntrinsics(frame);

C#

var settingsSubsampled = SubsampledSettingsForCamera(camera);
var frameSubsampled = camera.Capture(settingsSubsampled);
var estimatedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frameSubsampled);

Python

settings_subsampled = _subsampled_settings_for_camera(camera)
frame = camera.capture(settings_subsampled)
estimated_intrinsics_for_subsampled_settings = zivid.experimental.calibration.estimate_intrinsics(frame)
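As a rough intuition for why subsampled intrinsics differ, halving the image resolution roughly halves the focal lengths and principal point when expressed in pixels. The sketch below applies that scaling to illustrative numbers; the intrinsics the SDK returns also account for the exact sampling offsets, so treat this only as an approximation.

```python
def scale_intrinsics(fx, fy, cx, cy, factor):
    """Roughly rescale pinhole intrinsics for an image downscaled by `factor`
    (e.g. factor=2 for 2x2 subsampling); ignores sub-pixel sampling offsets."""
    return (fx / factor, fy / factor, cx / factor, cy / factor)


# Illustrative full-resolution values scaled for 2x2 subsampling.
print(scale_intrinsics(2765.0, 2764.0, 954.0, 606.0, 2))  # (1382.5, 1382.0, 477.0, 303.0)
```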

Hard-coded intrinsics are given for a single aperture and temperature. To get the hard-coded camera intrinsics, first connect to the camera:

C++

std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();

C#

Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();

Python

print("Connecting to camera")
camera = app.connect_camera()

Then, you can get the default OpenCV camera intrinsics:

C++

std::cout << "Getting camera intrinsics" << std::endl;
const auto intrinsics = Zivid::Experimental::Calibration::intrinsics(camera);

C#

Console.WriteLine("Getting camera intrinsics");
var intrinsics = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera);

Python

print("Getting camera intrinsics")
intrinsics = zivid.experimental.calibration.intrinsics(camera)

If you are sampling, you can get the OpenCV camera intrinsics corresponding to the combination of Zivid::Settings::Sampling::Pixel and Zivid::Settings::Resampling used in settings:

C++

const auto settingsSubsampled = subsampledSettingsForCamera(camera);
const auto fixedIntrinsicsForSubsampledSettings =
    Zivid::Experimental::Calibration::intrinsics(camera, settingsSubsampled);

C#

var settingsSubsampled = SubsampledSettingsForCamera(camera);
var fixedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settingsSubsampled);

Python

settings_subsampled = _subsampled_settings_for_camera(camera)
fixed_intrinsics_for_subsampled_settings = zivid.experimental.calibration.intrinsics(camera, settings_subsampled)

To get hard-coded intrinsics for 2D settings:

C++

const auto settings2D =
    Zivid::Settings2D{ Zivid::Settings2D::Acquisitions{ Zivid::Settings2D::Acquisition{} } };
const auto fixedIntrinsicsForSettings2D = Zivid::Experimental::Calibration::intrinsics(camera, settings2D);

C#

var settings2D = new Zivid.NET.Settings2D
{
    Acquisitions = { new Zivid.NET.Settings2D.Acquisition { } }
};
var fixedIntrinsicsForSettings2D = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settings2D);

Python

settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(zivid.Settings2D.Acquisition())
fixed_intrinsics_for_settings_2d = zivid.experimental.calibration.intrinsics(camera, settings_2d)

Caution

Hard-coded OpenCV intrinsics are fixed and do not adapt to changes in temperature or aperture, unlike estimateIntrinsics. This means they may not match captures taken under different conditions.

Ambient temperature variations affect the camera’s internal temperature. While Thermal Stabilization helps, the lens temperature can still differ significantly between the start of deployment and after stabilization. This temperature difference can cause discrepancies between hard-coded intrinsics and point cloud data. Aperture variations, especially in HDR mode where each acquisition uses a different aperture, further complicate the accuracy of hard-coded intrinsics.

Due to these complexities, we recommend estimating the camera intrinsics from the point cloud for more accurate results.

Saving camera intrinsics

You can save OpenCV camera intrinsics to a YML file using the following code:

C++

const auto outputFile = "Intrinsics.yml";
std::cout << "Saving camera intrinsics to file: " << outputFile << std::endl;
intrinsics.save(outputFile);

C#

var intrinsicsFile = "Intrinsics.yml";
Console.WriteLine("Saving camera intrinsics to file: " + intrinsicsFile);
intrinsics.Save(intrinsicsFile);

Python

output_file = "Intrinsics.yml"
print(f"Saving camera intrinsics to file: {output_file}")
intrinsics.save(output_file)
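Once saved, the YML file can be read back by any tool that parses the stored camera matrix and distortion coefficients. The sketch below pulls scalar fields out of a flat YML fragment; the key names (FX, FY, CX, CY) and all numeric values are hypothetical, so inspect the actual file written by intrinsics.save() for the real layout.

```python
import re


def parse_scalar_fields(yml_text):
    """Collect simple 'Key: number' pairs from a flat YML fragment."""
    fields = {}
    for match in re.finditer(r"(\w+):\s*([-+0-9.eE]+)", yml_text):
        fields[match.group(1)] = float(match.group(2))
    return fields


# Hypothetical fragment; the real file layout may differ.
example = """
FX: 2765.2
FY: 2764.9
CX: 954.8
CY: 606.3
"""
params = parse_scalar_fields(example)
camera_matrix = [
    [params["FX"], 0.0, params["CX"]],
    [0.0, params["FY"], params["CY"]],
    [0.0, 0.0, 1.0],
]
```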

Halcon camera intrinsics

Halcon uses a different camera model than the Zivid and OpenCV camera models. There are two options for getting Halcon intrinsics, both based on approximations of the OpenCV model.

Caution

If you need to use Halcon intrinsics (camera internal parameters), use one of the approaches described below to get them. Do not calibrate our 2D camera in Halcon to get Halcon intrinsics; it does not work well with our camera.

Requirements:

  • Python installed, with the ability to run Python scripts

  • Zivid-Python installed

The simplest way to get Halcon intrinsics for your camera is to run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, focusing on the example when reading from camera.

Note

This method is limited to getting the hard-coded camera intrinsics.

Requirements:

  • Python installed, with the ability to run Python scripts

  • Ability to build C++ or C# code samples

Another way to get Halcon intrinsics for your camera is to load the OpenCV camera intrinsics from a YML file and convert them to the Halcon format.

To get the OpenCV camera intrinsics and save them to a file, run one of the following samples:

  • C++: GetCameraIntrinsics.cpp. See Configure C++ Samples With CMake and Build Them in Visual Studio in Windows.

  • C#: GetCameraIntrinsics.cs. See Build C# Samples using Visual Studio.

  • Python: get_camera_intrinsics.py.

Then run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, focusing on the example when reading from camera.

Note

With this method, you can get both the hard-coded camera intrinsics and the camera intrinsics estimated from the point cloud. For the latter, however, you must save and load the intrinsics to and from file for each capture that needs them.

Which camera intrinsics should I use?

In general, we recommend using the actual point cloud data and the pixelMapping(camera, settings) function rather than intrinsics. If intrinsics are absolutely necessary, see the following guidelines for choosing the correct camera intrinsics.

Hard-coded camera intrinsics

We recommend using hard-coded camera intrinsics only in the following scenario:

  • One of the following cases:

    • You get the color image from a single 2D acquisition using capture2D()

    • You get the color image from a single 2D acquisition using capture2D() and the point cloud from a single 3D acquisition using capture3D()

    • You get the color image from a single 2D acquisition and the point cloud from a single 3D acquisition using capture2D3D()

  • For apertures (used for both 2D and 3D) similar to the hard-coded one (see the table).

  • For temperatures close to room temperature.

  • For applications that use only 2D data (for example, undistorting the image to detect straight lines).

Camera intrinsics estimated from the point cloud

We recommend using estimated intrinsics in almost any scenario:

  • Regardless of how you get the color image and the point cloud. This is because the point cloud is likely the result of acquisitions with different apertures and at a temperature different from the one the hard-coded intrinsics correspond to.

  • Any use case when using OpenCV intrinsics is necessary, for example:

    • Projecting 3D data onto a 2D image using projectPoints().

    • Estimating a 3D pose from a 2D image using solvePnP().

Note

The estimated camera intrinsics also work well for all cases where we recommend the hard-coded camera intrinsics. However, estimating intrinsics requires some computation time, whereas getting the hard-coded intrinsics from the camera is instantaneous.

2D and 3D capture with different resolution

We recommend using pixel mapping and the point cloud data to go from 2D to 3D. This will handle the resolution difference correctly.

Another approach is to match the resolution of the 2D data to the 3D data by downsampling or subsampling the 2D image. For more information, see Sampling (3D), Sampling (2D), and Resampling.

If you still need to use intrinsics, we recommend using estimateIntrinsics(frame).

Hand-Eye Calibration

We recommend our own Hand-Eye Calibration, as it is best suited for Zivid cameras. Our hand-eye calibration method does not require camera intrinsics; however, many other methods do. If you use one of the other methods, we recommend using estimateIntrinsics() from the point cloud. Read more about choosing the correct Hand-Eye calibration method.

Projecting 3D points to a 2D image plane

We recommend using estimateIntrinsics when projecting Zivid point clouds onto a 2D image plane.

This results in smaller re-projection errors than using hard-coded intrinsics. However, keep in mind that using our correction filters, e.g. Gaussian Smoothing and Contrast Distortion Filter, will increase re-projection errors. When using estimated intrinsics on a point cloud captured without correction filters, expect less than 1 pixel of re-projection error. When using correction filters, re-projection errors will be larger for the significantly corrected points; for most points, however, the re-projection error should still be less than 1 pixel.
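Re-projection error is simply the pixel distance between where a point projects with the intrinsics and where it is actually observed in the image. A minimal helper, with illustrative values:

```python
import math


def reprojection_error(projected, observed):
    """Euclidean distance in pixels between a projected point and its observed pixel."""
    return math.hypot(projected[0] - observed[0], projected[1] - observed[1])


# An error well under 1 pixel, as expected for estimated intrinsics on a
# point cloud captured without correction filters (illustrative values).
print(reprojection_error((960.4, 600.3), (960.0, 600.0)))  # 0.5
```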

Downsampled or Subsampled 2D

When you subsample or downsample the 2D data, you can regain the direct 1:1 mapping between 2D and 3D; see Sampling (3D).

When downsampling a full-resolution 2D image, it is important to understand which pixels are used to generate the downsampled pixels. If you use pixels symmetrically around the wanted pixel, the averaged pixel lands on the pixel location that corresponds to the 3D data. This matches best with the intrinsics that the SDK provides.
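As an illustration of symmetric averaging, the sketch below block-averages a grayscale image in 2x2 blocks, so each output pixel is the mean of a symmetric 2x2 neighborhood and its center lands where the corresponding 3D data point lies. This is a generic sketch, not the SDK's resampling implementation.

```python
def downsample_2x2(image):
    """Average a grayscale image (list of rows) over non-overlapping 2x2 blocks."""
    rows, cols = len(image), len(image[0])
    return [
        [
            # Mean of the symmetric 2x2 block starting at (r, c).
            (image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]) / 4
            for c in range(0, cols - 1, 2)
        ]
        for r in range(0, rows - 1, 2)
    ]


print(downsample_2x2([[1, 3], [5, 7]]))  # [[4.0]]
```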


Estimating 3D pose from a 2D image

When projecting 2D pixels into 3D space, we recommend using estimateIntrinsics from the point cloud.

Compared to hard-coded camera intrinsics, the intrinsics estimated from the point cloud provide better results. However, we rather recommend estimating the pose directly from the point cloud, as this is more accurate.

2D image processing algorithms

If your application uses only 2D data, both the hard-coded intrinsics and the intrinsics estimated from the point cloud work well. For example, you can undistort the 2D image to correct for lens distortion and then detect straight lines.

Version History

  • SDK 2.10.0: The intrinsics functions must be corrected before being used with Monochrome Capture.