Camera Intrinsics

The Zivid camera model is more complex than some widely known pinhole camera models, such as the OpenCV camera model, and uses more intrinsic parameters. In addition, our cameras use a rolling calibration technique, which means that the point cloud is generated as a function of aperture, temperature, and color channel.

3D-to-2D Mapping in Zivid Cameras

Every camera has a physical relationship between 3D points in space and their corresponding 2D pixel indices. For Zivid cameras:

  • Same Resolution: When the 2D and 3D data have the same resolution, there is a direct 1:1 mapping between the two.

  • Different Resolutions: If the 2D and 3D resolutions differ, the mapping depends on the following settings:

    • Settings2D::Sampling::Pixel

    • Zivid::Settings::Sampling::Pixel

To maintain a 1:1 correspondence between 2D and 3D data, you can resample the 3D point cloud using the Settings::Processing::Resampling setting. For more information, see Resampling.

Alternatively, if resampling is not performed, you can use the pixelMapping(camera, settings) function to get the correct point-to-pixel mapping for any 3D sampling configuration combined with full-resolution 2D.
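For intuition, the point-to-pixel relationship can be sketched with plain arithmetic. The stride and offset values below are hypothetical examples of what a 2x2 subsampling configuration might produce; in a real application they would come from the pixel mapping returned by the SDK rather than being hard-coded:

```python
# Hypothetical sketch of mapping a (subsampled) point cloud index to the
# corresponding pixel in a full-resolution 2D image. In a real application
# the stride/offset values come from pixelMapping(camera, settings).

def point_to_pixel(row_3d, col_3d, row_stride=2, col_stride=2, row_offset=0, col_offset=0):
    """Map a point cloud index to a full-resolution 2D pixel index."""
    return (row_3d * row_stride + row_offset, col_3d * col_stride + col_offset)

# With 2x2 subsampling, point (10, 20) in the point cloud corresponds
# to pixel (20, 40) in the full-resolution 2D image:
print(point_to_pixel(10, 20))  # (20, 40)
```

With a color-channel subsampling mode, the offsets would additionally select which pixel inside each 2x2 block the 3D point corresponds to.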

Caution

We advise against using camera intrinsics for the following reasons:

  • Loss of Information: Approximations can result in a loss of valuable information.

  • Better Alternatives: More effective methods are often available to achieve similar or better results.

We recommend using the point cloud data directly and pixel mapping instead of intrinsics.

The Zivid camera model is proprietary, which is why our internal camera intrinsics are not available in the SDK. However, since many machine vision algorithms rely on standard models, Zivid provides approximations of the OpenCV and Halcon intrinsics models for compatibility.

OpenCV Camera Intrinsics

The Zivid SDK provides several functions for getting OpenCV camera intrinsics.

Each function returns intrinsics corresponding to a particular resolution:

  • intrinsics(camera): the default value of Zivid::Settings::Sampling::Pixel for the camera

  • intrinsics(camera, settings): the combination of Zivid::Settings::Sampling::Pixel and Zivid::Settings::Resampling in settings

  • intrinsics(camera, settings_2d): the value of Zivid::Settings2D::Sampling::Pixel in settings_2d

  • estimateIntrinsics(frame): the combination of Zivid::Settings::Sampling::Pixel and Zivid::Settings::Resampling used to capture the frame

We recommend estimating the intrinsics from the point cloud to get the most accurate results. The hard-coded camera intrinsics are given for a single aperture and a single temperature, so they will not be as accurate as the estimated intrinsics. See the table below for the values for each camera model.

Camera model    Lens temperature (°C)    Aperture
Zivid 2+        35                       2.68
Zivid 2         35                       2.30

The camera intrinsics are estimated from the point cloud, taking into account the temperature and aperture used during the capture.

To get the estimated OpenCV intrinsics, you must first connect to the camera:

C++

std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();
C#

Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();
Python

print("Connecting to camera")
camera = app.connect_camera()

Call the capture function to get a frame:

C++

const auto frame = camera.capture(settings);
C#

var frame = camera.Capture(settings);
Python

with camera.capture(settings=settings) as frame:

Then, you can estimate the intrinsics from the frame:

C++

const auto estimated_intrinsics = Zivid::Experimental::Calibration::estimateIntrinsics(frame);
C#

var estimatedIntrinsics = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frame);
Python

estimated_intrinsics = zivid.experimental.calibration.estimate_intrinsics(frame)

The estimateIntrinsics function takes sampling strategies into account. For example, if you set Zivid::Settings::Sampling::Pixel to by2x2 and Zivid::Settings::Resampling to disabled, you get correct (subsampled) intrinsics:

C++

const auto settingsSubsampled = subsampledSettingsForCamera(camera);
const auto frame = camera.capture(settingsSubsampled);
const auto estimatedIntrinsicsForSubsampledSettings =
    Zivid::Experimental::Calibration::estimateIntrinsics(frame);
C#

var settingsSubsampled = SubsampledSettingsForCamera(camera);
var frameSubsampled = camera.Capture(settingsSubsampled);
var estimatedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frameSubsampled);
Python

settings_subsampled = _subsampled_settings_for_camera(camera)
frame = camera.capture(settings_subsampled)
estimated_intrinsics_for_subsampled_settings = zivid.experimental.calibration.estimate_intrinsics(frame)
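To build intuition for what "subsampled intrinsics" means, consider how pinhole-style parameters roughly scale when the resolution is halved in each dimension, as with 2x2 subsampling. This is an illustrative calculation with made-up values, not the SDK's internal model, and it ignores subtleties such as half-pixel center conventions:

```python
# Illustrative only: how pinhole intrinsics roughly scale when the image
# resolution is halved in each dimension (as with 2x2 subsampling).
# The numbers are made up; half-pixel center conventions are ignored.

def scale_intrinsics(fx, fy, cx, cy, factor):
    """Scale the focal lengths and principal point by a resolution factor."""
    return (fx * factor, fy * factor, cx * factor, cy * factor)

full_res = (2760.0, 2760.0, 960.0, 600.0)  # fx, fy, cx, cy (made-up values)
print(scale_intrinsics(*full_res, 0.5))  # (1380.0, 1380.0, 480.0, 300.0)
```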

Hard-coded intrinsics are given for a single aperture and temperature. To get the hard-coded camera intrinsics, first connect to the camera:

C++

std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();
C#

Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();
Python

print("Connecting to camera")
camera = app.connect_camera()

Then, you can get the default OpenCV camera intrinsics:

C++

std::cout << "Getting camera intrinsics" << std::endl;
const auto intrinsics = Zivid::Experimental::Calibration::intrinsics(camera);
C#

Console.WriteLine("Getting camera intrinsics");
var intrinsics = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera);
Python

print("Getting camera intrinsics")
intrinsics = zivid.experimental.calibration.intrinsics(camera)

If you are sampling, you can get the OpenCV camera intrinsics corresponding to the combination of Zivid::Settings::Sampling::Pixel and Zivid::Settings::Resampling used in settings:

C++

const auto settingsSubsampled = subsampledSettingsForCamera(camera);
const auto fixedIntrinsicsForSubsampledSettings =
    Zivid::Experimental::Calibration::intrinsics(camera, settingsSubsampled);
C#

var settingsSubsampled = SubsampledSettingsForCamera(camera);
var fixedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settingsSubsampled);
Python

settings_subsampled = _subsampled_settings_for_camera(camera)
fixed_intrinsics_for_subsampled_settings = zivid.experimental.calibration.intrinsics(camera, settings_subsampled)

For hard-coded intrinsics for 2D settings:

C++

const auto settings2D =
    Zivid::Settings2D{ Zivid::Settings2D::Acquisitions{ Zivid::Settings2D::Acquisition{} } };
const auto fixedIntrinsicsForSettings2D = Zivid::Experimental::Calibration::intrinsics(camera, settings2D);
C#

var settings2D = new Zivid.NET.Settings2D
{
    Acquisitions = { new Zivid.NET.Settings2D.Acquisition { } }
};
var fixedIntrinsicsForSettings2D = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settings2D);
Python

settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(zivid.Settings2D.Acquisition())
fixed_intrinsics_for_settings_2d = zivid.experimental.calibration.intrinsics(camera, settings_2d)

Caution

Hard-coded OpenCV intrinsics are fixed and do not adapt to changes in temperature or aperture, unlike estimateIntrinsics. This means they may not match captures taken under different conditions.

Ambient temperature variations affect the camera’s internal temperature. While Thermal Stabilization helps, the lens temperature can still differ significantly between the start of deployment and after stabilization. This temperature difference can cause discrepancies between hard-coded intrinsics and point cloud data. Aperture variations, especially in HDR mode where each acquisition uses a different aperture, further complicate the accuracy of hard-coded intrinsics.

Due to these complexities, we recommend estimating the camera intrinsics from the point cloud for more accurate results.

Saving Camera Intrinsics

You can save OpenCV camera intrinsics to a YML file using the following code:

C++

const auto outputFile = "Intrinsics.yml";
std::cout << "Saving camera intrinsics to file: " << outputFile << std::endl;
intrinsics.save(outputFile);
C#

var intrinsicsFile = "Intrinsics.yml";
Console.WriteLine("Saving camera intrinsics to file: " + intrinsicsFile);
intrinsics.Save(intrinsicsFile);
Python

output_file = "Intrinsics.yml"
print(f"Saving camera intrinsics to file: {output_file}")
intrinsics.save(output_file)

Halcon Camera Intrinsics

Halcon uses a camera model different from both the Zivid and OpenCV models. There are two ways to get Halcon intrinsics, both based on approximations of the OpenCV model.

Caution

If you need to use Halcon intrinsics (camera internal parameters), use one of the approaches described below to get them. Do not calibrate our 2D camera in Halcon to get Halcon intrinsics; it does not work well with our camera.

Requirements:

  • Python installed, and the ability to run Python scripts

  • Zivid-Python installed

The easiest way to get Halcon intrinsics for your camera is to run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, paying particular attention to Example when reading from camera.

Note

This method only provides the hard-coded camera intrinsics.

Requirements:

  • Python installed, and the ability to run Python scripts

  • The ability to build the C++ or C# code samples

Another way to get Halcon intrinsics for your camera is to load the OpenCV camera intrinsics from a YML file and convert them to the Halcon format.

To get the OpenCV camera intrinsics and save them to a file, run one of the following samples:

  • GetCameraIntrinsics.cpp; see Configure C++ Samples With CMake and Build Them in Visual Studio in Windows for how to build it.

  • GetCameraIntrinsics.cs; see Build C# Samples Using Visual Studio for how to build it.

  • get_camera_intrinsics.py

Then, run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, paying particular attention to Example when reading from file.

Note

This approach provides both the hard-coded camera intrinsics and the intrinsics estimated from the point cloud. For the latter, however, you need to save and load the intrinsics for every capture you need them for.
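For a rough idea of what such a conversion involves, the sketch below maps OpenCV pinhole parameters onto Halcon's area-scan parameter names, ignoring lens distortion. All numbers, including the sensor pixel pitch, are placeholders; use the convert_intrinsics_opencv_to_halcon.py sample for the real conversion:

```python
# Simplified, distortion-free sketch of an OpenCV-to-Halcon intrinsics
# conversion. Values are placeholders; the official sample script handles
# the full conversion, including distortion.

def opencv_to_halcon(fx, fy, cx, cy, pixel_pitch_m):
    focus = fx * pixel_pitch_m  # focal length in metres
    sx = pixel_pitch_m          # horizontal cell size in metres
    sy = focus / fy             # vertical cell size absorbs the fx/fy ratio
    return {"Focus": focus, "Sx": sx, "Sy": sy, "Cx": cx, "Cy": cy}

# Placeholder intrinsics and a placeholder 3.45 um pixel pitch:
params = opencv_to_halcon(fx=2760.0, fy=2760.0, cx=960.0, cy=600.0, pixel_pitch_m=3.45e-6)
print(params["Focus"])  # roughly 0.0095 m
```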

Which Camera Intrinsics Should I Use?

In general, we recommend using the actual point cloud data and the pixelMapping(camera, settings) function rather than intrinsics. If intrinsics are absolutely necessary, see the following guidelines for choosing the correct camera intrinsics.

Hard-Coded Camera Intrinsics

We recommend using hard-coded camera intrinsics only in the following scenario:

  • One of the following cases:

    • You get the color image from a single 2D acquisition using capture2D()

    • You get the color image from a single 2D acquisition using capture2D() and the point cloud from a single 3D acquisition using capture3D()

    • You get the color image from a single 2D acquisition and the point cloud from a single 3D acquisition using capture2D3D()

  • For apertures (used for both 2D and 3D) similar to the hard-coded one (see the table).

  • Ambient temperatures close to room temperature.

  • Applications that use only 2D data, for example, undistorting an image to detect straight lines.

Camera Intrinsics Estimated from the Point Cloud

We recommend using estimated intrinsics in almost any scenario:

  • Regardless of how you get the color image and the point cloud. This is because the point cloud is likely the result of acquisitions with different apertures and at a temperature different from the one corresponding to the hard-coded intrinsics.

  • Any use case where using OpenCV intrinsics is necessary, for example:

    • Projecting 3D data onto a 2D image using projectPoints().

    • Estimating a 3D pose from a 2D image using solvePnP().
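The distortion-free core of such a projection is the plain pinhole model; the sketch below shows it with made-up intrinsics (OpenCV's projectPoints() additionally applies the lens distortion coefficients):

```python
# Pinhole projection of a 3D point (in camera coordinates, metres) to a
# 2D pixel. This is the distortion-free core of what projectPoints() does;
# the intrinsics below are made-up example values.

def project_point(x, y, z, fx, fy, cx, cy):
    """Project a 3D point onto the image plane of a pinhole camera."""
    return (fx * x / z + cx, fy * y / z + cy)

# A point 0.5 m in front of the camera and 10 mm right of the optical axis
# lands about 55 pixels right of the principal point:
u, v = project_point(0.01, 0.0, 0.5, fx=2760.0, fy=2760.0, cx=960.0, cy=600.0)
print(u, v)  # approximately 1015.2 600.0
```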

Note

The estimated camera intrinsics also work well for all cases where we recommend the hard-coded camera intrinsics. However, estimating intrinsics requires some computation time, whereas getting the hard-coded intrinsics from the camera is instantaneous.

2D and 3D Captures with Different Resolutions

We recommend using pixel mapping and the point cloud data to go from 2D to 3D. This will handle the resolution difference correctly.

Another approach is to match the resolution of the 2D data to the 3D data by downsampling or subsampling the 2D image. For more information, see Sampling (3D), Sampling (2D), and Resampling.

If you still need to use intrinsics, we recommend using estimateIntrinsics(frame).

Hand-Eye Calibration

We recommend our own hand-eye calibration, as it is best suited for Zivid cameras. Our hand-eye calibration method does not require camera intrinsics; however, many other methods do. If you use one of the other methods, we recommend using estimateIntrinsics() from the point cloud. Read more about choosing the correct hand-eye calibration method.

Projecting 3D Points onto a 2D Image Plane

We recommend using estimateIntrinsics when projecting Zivid point clouds onto a 2D image plane.

This results in smaller re-projection errors than using hard-coded intrinsics. However, keep in mind that using our correction filters, e.g., Gaussian Smoothing and the Contrast Distortion Filter, will increase re-projection errors. When using estimated intrinsics on a point cloud captured without correction filters, expect less than 1 pixel of re-projection error. With correction filters, the re-projection errors will be larger for significantly corrected points; for most points, however, the error should still be less than 1 pixel.

Downsampling or Subsampling 2D

When you subsample or downsample the 2D data, you regain the direct 1:1 mapping between 2D and 3D; see Sampling (3D).

When downsampling a full-resolution 2D image, it is important to understand which pixels are used to generate each downsampled pixel. If you average pixels symmetrically around the target pixel, the averaged pixel lands on the pixel location that corresponds to the 3D data. This corresponds best with the intrinsics that the SDK provides.
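As a sketch of the symmetric grouping described above, 2x2 block averaging replaces each non-overlapping 2x2 block with its mean, so the averaged value sits centered on the four source pixels:

```python
# Downsample an image by averaging each non-overlapping 2x2 block.
# Pure-Python sketch on a nested list; the point is the symmetric
# grouping of source pixels around each downsampled pixel.

def downsample_2x2(image):
    rows, cols = len(image), len(image[0])
    return [
        [(image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]) / 4
         for c in range(0, cols - 1, 2)]
        for r in range(0, rows - 1, 2)
    ]

image = [[10, 20, 30, 40],
         [50, 60, 70, 80],
         [90, 100, 110, 120],
         [130, 140, 150, 160]]
print(downsample_2x2(image))  # [[35.0, 55.0], [115.0, 135.0]]
```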


Estimating a 3D Pose from a 2D Image

When projecting a 2D pixel into 3D space, we recommend using estimateIntrinsics from the point cloud.

Compared to hard-coded camera intrinsics, intrinsics estimated from the point cloud provide better results. However, we recommend estimating the pose directly from the point cloud instead, as this is more accurate.

2D Image Processing Algorithms

If your application uses only 2D data, both the hard-coded intrinsics and the intrinsics estimated from the point cloud work well. One example is undistorting a 2D image to correct lens distortion and thereby ensure reliable detection of straight lines.

Version History

SDK      Changes
2.10.0   Monochrome Capture requires the intrinsics to be modified before use.