Camera Intrinsics
The Zivid camera model is more complex than well-known pinhole camera models (e.g., the OpenCV camera model), as it uses more intrinsic parameters. In addition, our cameras utilize a rolling calibration, which means that the point clouds are generated as a function of aperture, temperature, and color channel.
3D-to-2D Mapping in Zivid Cameras
Every camera has a physical relationship between 3D points in space and their corresponding 2D pixel indices. For Zivid cameras:
Same Resolution: When the 2D and 3D data have the same resolution, there is a direct 1:1 mapping between the two.
Different Resolutions: If the 2D and 3D resolutions differ, the mapping depends on the following settings:
Settings2D::Sampling::Pixel
Zivid::Settings::Sampling::Pixel
To maintain a 1:1 correspondence between 2D and 3D data, you can resample the 3D point cloud using the Settings::Processing::Resampling setting. For more information, see Resampling.
Alternatively, if resampling is not performed, you can use the pixelMapping(camera, settings) function to get the correct point-to-pixel mapping for any 3D sampling configuration and full-resolution 2D, as sketched below.
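The following is a minimal Python sketch of this workflow. It assumes zivid-python exposes pixel mapping as zivid.experimental.calibration.pixel_mapping and that the returned object carries row/column strides and offsets; verify the names against your SDK version.

```python
# Hedged sketch: map a point cloud index to its full-resolution 2D pixel.
# Assumes zivid.experimental.calibration.pixel_mapping and its attribute
# names (row_stride, col_stride, row_offset, col_offset); verify in your SDK.
import zivid
import zivid.experimental.calibration

app = zivid.Application()
camera = app.connect_camera()

settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])
settings.sampling.pixel = zivid.Settings.Sampling.Pixel.blueSubsample2x2

mapping = zivid.experimental.calibration.pixel_mapping(camera, settings)

# For a 3D point at index (row, col) in the subsampled point cloud, the
# corresponding full-resolution 2D pixel follows from the mapping's
# strides and offsets.
row, col = 100, 200  # example point cloud index
pixel_row = row * mapping.row_stride + mapping.row_offset
pixel_col = col * mapping.col_stride + mapping.col_offset
print(f"Point cloud ({row}, {col}) -> 2D pixel ({pixel_row}, {pixel_col})")
```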
Caution
We advise against using camera intrinsics for the following reasons:
Loss of Information: Approximations can result in a loss of valuable information.
Better Alternatives: More effective methods are often available to achieve similar or better results.
We recommend using the point cloud data directly and pixel mapping instead of intrinsics.
The Zivid camera model is proprietary, which is why our internal camera intrinsics are not available in the SDK. However, since many machine vision algorithms rely on standard models, Zivid provides approximations of the OpenCV and Halcon intrinsics models for compatibility.
OpenCV camera intrinsics
The Zivid SDK offers a few options for getting OpenCV camera intrinsics.
| Function name | Resolution the returned intrinsics correspond to |
|---|---|
| intrinsics(camera) | Default value of Settings::Sampling::Pixel for the camera |
| intrinsics(camera, settings) | Combination of Settings::Sampling::Pixel and Settings::Processing::Resampling in settings |
| intrinsics(camera, settings2D) | Value of Settings2D::Sampling::Pixel in settings2D |
| estimateIntrinsics(frame) | Combination of Settings::Sampling::Pixel and Settings::Processing::Resampling used to capture the frame |
We recommend estimating the intrinsics from the point cloud to get the most accurate results. The hard-coded camera intrinsics are given for a single aperture and a single temperature, so they will not be as accurate as the estimated intrinsics. See the table below for the values for each camera model.
| Camera Model | Lens Temperature (°C) | Aperture |
|---|---|---|
| Zivid 2+ | 35 | 2.68 |
| Zivid 2 | 35 | 2.30 |
The camera intrinsics are estimated from the point cloud, taking into account the temperature and aperture used during the capture.
To get the estimated OpenCV intrinsics, you first have to connect to the camera:
std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();
Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();
print("Connecting to camera")
camera = app.connect_camera()
Call the capture function to get the frame:
const auto frame = camera.capture(settings);
var frame = camera.Capture(settings);
with camera.capture(settings=settings) as frame:
Then, you can estimate the intrinsics from the frame:
const auto estimated_intrinsics = Zivid::Experimental::Calibration::estimateIntrinsics(frame);
var estimatedIntrinsics = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frame);
estimated_intrinsics = zivid.experimental.calibration.estimate_intrinsics(frame)
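If you want to use these intrinsics with OpenCV, the following hedged Python sketch repacks them into the NumPy arrays OpenCV expects. The camera_matrix and distortion field names follow zivid-python's CameraIntrinsics; verify them against your SDK version.

```python
# Hedged sketch: repack estimated intrinsics into OpenCV-style arrays.
import numpy as np

cm = estimated_intrinsics.camera_matrix
camera_matrix = np.array(
    [[cm.fx, 0.0, cm.cx],
     [0.0, cm.fy, cm.cy],
     [0.0, 0.0, 1.0]]
)

d = estimated_intrinsics.distortion
# OpenCV's default distortion coefficient order is (k1, k2, p1, p2, k3).
dist_coeffs = np.array([d.k1, d.k2, d.p1, d.p2, d.k3])
```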
The estimateIntrinsics function takes sampling strategies into account. For example, if you set Zivid::Settings::Sampling::Pixel to by2x2 and Zivid::Settings::Resampling to disabled, you get correct (subsampled) intrinsics:
const auto settingsSubsampled = subsampledSettingsForCamera(camera);
const auto frame = camera.capture(settingsSubsampled);
const auto estimatedIntrinsicsForSubsampledSettings =
Zivid::Experimental::Calibration::estimateIntrinsics(frame);
var settingsSubsampled = SubsampledSettingsForCamera(camera);
var frameSubsampled = camera.Capture(settingsSubsampled);
var estimatedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frameSubsampled);
settings_subsampled = _subsampled_settings_for_camera(camera)
frame = camera.capture(settings_subsampled)
estimated_intrinsics_for_subsampled_settings = zivid.experimental.calibration.estimate_intrinsics(frame)
Hard-coded intrinsics are given for a single aperture and temperature. To get the hard-coded camera intrinsics, first connect to the camera:
std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();
Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();
print("Connecting to camera")
camera = app.connect_camera()
Then, you can get the default OpenCV camera intrinsics:
std::cout << "Getting camera intrinsics" << std::endl;
const auto intrinsics = Zivid::Experimental::Calibration::intrinsics(camera);
Console.WriteLine("Getting camera intrinsics");
var intrinsics = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera);
print("Getting camera intrinsics")
intrinsics = zivid.experimental.calibration.intrinsics(camera)
If you are sampling, you can get the OpenCV camera intrinsics corresponding to the combination of Zivid::Settings::Sampling::Pixel and Zivid::Settings::Resampling used in settings:
const auto settingsSubsampled = subsampledSettingsForCamera(camera);
const auto fixedIntrinsicsForSubsampledSettings =
Zivid::Experimental::Calibration::intrinsics(camera, settingsSubsampled);
var settingsSubsampled = SubsampledSettingsForCamera(camera);
var fixedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settingsSubsampled);
settings_subsampled = _subsampled_settings_for_camera(camera)
fixed_intrinsics_for_subsampled_settings = zivid.experimental.calibration.intrinsics(camera, settings_subsampled)
For hard-coded intrinsics for 2D settings:
const auto settings2D =
Zivid::Settings2D{ Zivid::Settings2D::Acquisitions{ Zivid::Settings2D::Acquisition{} } };
const auto fixedIntrinsicsForSettings2D = Zivid::Experimental::Calibration::intrinsics(camera, settings2D);
var settings2D = new Zivid.NET.Settings2D
{
Acquisitions = { new Zivid.NET.Settings2D.Acquisition { } }
};
var fixedIntrinsicsForSettings2D = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settings2D);
settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(zivid.Settings2D.Acquisition())
fixed_intrinsics_for_settings_2d = zivid.experimental.calibration.intrinsics(camera, settings_2d)
Caution
Hard-coded OpenCV intrinsics are fixed and do not adapt to changes in temperature or aperture, unlike estimateIntrinsics. This means they may not match captures taken under different conditions.
Ambient temperature variations affect the camera’s internal temperature. While Thermal Stabilization helps, the lens temperature can still differ significantly between the start of deployment and after stabilization. This temperature difference can cause discrepancies between hard-coded intrinsics and point cloud data. Aperture variations, especially in HDR mode where each acquisition uses a different aperture, further complicate the accuracy of hard-coded intrinsics.
Due to these complexities, we recommend estimating the camera intrinsics from the point cloud for more accurate results.
Saving camera intrinsics
You can save OpenCV camera intrinsics to a YML file using the following code:
const auto outputFile = "Intrinsics.yml";
std::cout << "Saving camera intrinsics to file: " << outputFile << std::endl;
intrinsics.save(outputFile);
var intrinsicsFile = "Intrinsics.yml";
Console.WriteLine("Saving camera intrinsics to file: " + intrinsicsFile);
intrinsics.Save(intrinsicsFile);
output_file = "Intrinsics.yml"
print(f"Saving camera intrinsics to file: {output_file}")
intrinsics.save(output_file)
Halcon camera intrinsics
Halcon uses a camera model different from the Zivid and OpenCV camera models. There are two options for getting Halcon intrinsics, and both are based on an approximation of the OpenCV model.
Caution
If you need to use Halcon intrinsics (camera internal parameters), use one of the approaches described below to get them. Do not calibrate our 2D camera in Halcon to get Halcon intrinsics; it does not work well with our camera.
Requirements:
Python installed and the ability to run Python scripts
Zivid-Python installed
The simplest method to get Halcon intrinsics for your camera is to run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, focusing on the example when reading from the camera.
Note
This method is limited to getting the hard-coded camera intrinsics.
Requirements:
Python installed and the ability to run Python scripts
The ability to build C++ or C# code samples
The other method to get Halcon intrinsics for your camera is to load the OpenCV camera intrinsics from a YML file and convert them to Halcon format.
To get OpenCV camera intrinsics and save them to a file, run one of the following samples:
GetCameraIntrinsics.cpp; see Configure C++ Samples With CMake and Build Them in Visual Studio in Windows.
GetCameraIntrinsics.cs; see Build C# Samples using Visual Studio.
get_camera_intrinsics.py
Then, run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, focusing on the example when reading from file.
Note
This method allows getting both the hard-coded camera intrinsics and the camera intrinsics estimated from the point cloud. However, for the latter, you have to save and load the intrinsics file for each capture you need intrinsics for.
Which camera intrinsics should I use?
In general, we recommend using the actual point cloud data and the pixelMapping(camera, settings) function rather than intrinsics.
If absolutely necessary, see the following guidelines for choosing the correct camera intrinsics.
Hard-coded camera intrinsics
We recommend using hard-coded camera intrinsics only in the following scenario:
One of the following cases:
You get the color image from a single 2D acquisition using capture2D().
You get the color image from a single 2D acquisition using capture2D() and the point cloud from a single 3D acquisition using capture3D().
You get the color image from a single 2D acquisition and the point cloud from a single 3D acquisition using capture2D3D().
For apertures (used for both 2D and 3D) similar to the hard-coded one (see the table above).
For temperatures close to room temperature.
For applications where you use only 2D data, e.g., undistorting the image to detect straight lines.
Camera intrinsics estimated from point cloud
We recommend using estimated intrinsics in almost any scenario:
Regardless of how you get the color image and the point cloud. This is because the point cloud is likely the result of acquisitions with different apertures and at a temperature different from the one corresponding to the hard-coded intrinsics.
Any use case where using OpenCV intrinsics is necessary, for example:
projecting 3D data to a 2D image using projectPoints().
estimating 3D pose from a 2D image using solvePnP().
Note
The estimated camera intrinsics also work well for all cases where we recommend the hard-coded camera intrinsics. However, estimating intrinsics requires some computation time, whereas getting the hard-coded intrinsics from the camera is instantaneous.
2D and 3D capture with different resolution
We recommend using pixel mapping and the point cloud data to go from 2D to 3D. This will handle the resolution difference correctly.
Another approach is to match the resolution of the 2D data to the 3D data by downsampling or subsampling the 2D image. See Sampling (3D), Sampling (2D), and Resampling for more information.
If you still need to use intrinsics, we recommend using estimateIntrinsics(frame).
Hand-Eye Calibration
We recommend our own Hand-Eye Calibration, as it is best suited for Zivid cameras.
Our hand-eye calibration method does not require camera intrinsics; however, many other methods do.
If you use one of the other methods, we recommend using estimateIntrinsics() from the point cloud.
Read more about choosing the correct Hand-Eye calibration method.
Projecting 3D points to a 2D image plane
We recommend using estimateIntrinsics when projecting Zivid point clouds onto a 2D image plane.
This will result in smaller re-projection errors than using hard-coded intrinsics. However, keep in mind that using our correction filters, e.g., Gaussian Smoothing and Contrast Distortion Filter, will increase re-projection errors. When using estimated intrinsics on a point cloud captured without correction filters, less than 1 pixel of re-projection error is expected. When using correction filters, the re-projection errors will be larger for the significantly corrected points. However, for most points, there should be less than 1 pixel of re-projection error.
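As an illustration, here is a hedged Python sketch that projects a Zivid point cloud back onto the image plane with OpenCV's projectPoints, reusing the camera_matrix and dist_coeffs arrays built from estimated intrinsics in the earlier sketch; frame is assumed to be a captured Zivid frame.

```python
# Hedged sketch: re-project the point cloud onto the 2D image plane.
import cv2
import numpy as np

xyz = frame.point_cloud().copy_data("xyz")  # (H, W, 3) in mm, camera frame
points = xyz.reshape(-1, 3)
points = points[~np.isnan(points).any(axis=1)]  # drop missing points

# The points are already expressed in the camera frame, so the
# rotation and translation are zero.
rvec = np.zeros(3)
tvec = np.zeros(3)
pixels, _ = cv2.projectPoints(points, rvec, tvec, camera_matrix, dist_coeffs)
```

For a point cloud captured without correction filters, the projected pixels should land within about 1 pixel of the original image coordinates, as described above.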
Downsampled or Subsampled 2D
When you subsample or downsample the 2D data, you can regain the direct 1:1 mapping between 2D and 3D; see Sampling (3D).
When downsampling a full-resolution 2D image, it is important to understand which pixels are used to generate the downsampled pixels. If you use pixels symmetrically around the wanted pixel, the averaged pixel should land on the pixel location that corresponds to the 3D data. This corresponds best with the intrinsics that the SDK provides; a sketch of such an averaging scheme is shown below.
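As a concrete example, the following Python sketch applies the simplest symmetric scheme, averaging each 2x2 block of a full-resolution image into one output pixel. It illustrates the idea above and is not the SDK's own resampling.

```python
# Hedged sketch: 2x2 block-average a full-resolution image so its
# resolution matches 3D data captured with a 2x2 subsampling setting.
import numpy as np

def downsample_2x2(rgb: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of an (H, W, C) image into one pixel."""
    h, w, c = rgb.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even image dimensions"
    blocks = rgb.astype(np.float64).reshape(h // 2, 2, w // 2, 2, c)
    return blocks.mean(axis=(1, 3)).astype(rgb.dtype)
```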
For more information, see Sampling (3D), Sampling (2D), and Resampling.
Estimating 3D pose from a 2D image
When estimating 3D pose from a 2D image, we recommend using estimateIntrinsics from the point cloud.
Compared to hard-coded camera intrinsics, the intrinsics estimated from the point cloud will provide better results. However, we recommend estimating the pose directly from the point cloud instead, as this is more accurate.
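If you do use the 2D approach, here is a hedged Python sketch with OpenCV's solvePnP, reusing the camera_matrix and dist_coeffs arrays from the earlier sketch; object_points (Nx3, in the object frame) and image_points (Nx2, in pixels) are assumed to come from your own feature detection.

```python
# Hedged sketch: estimate object pose from 2D-3D correspondences.
import cv2
import numpy as np

success, rvec, tvec = cv2.solvePnP(
    object_points.astype(np.float64),  # (N, 3) points in the object frame
    image_points.astype(np.float64),   # (N, 2) matching image pixels
    camera_matrix,
    dist_coeffs,
)
if success:
    rotation, _ = cv2.Rodrigues(rvec)  # rvec as a 3x3 rotation matrix
```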
2D image processing algorithms
If you only use 2D data in your application, both the hard-coded intrinsics and those estimated from the point cloud will work well. An example is undistorting the 2D image to correct lens distortion so that straight lines can be reliably detected.
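For example, a hedged Python sketch of undistorting a Zivid 2D image with OpenCV, assuming frame_2d comes from a 2D capture and that camera_matrix and dist_coeffs were built as in the earlier sketch:

```python
# Hedged sketch: undistort a 2D image before running 2D algorithms.
import cv2
import numpy as np

rgba = frame_2d.image_rgba().copy_data()  # (H, W, 4) from a 2D capture
rgb = np.ascontiguousarray(rgba[:, :, :3])
undistorted = cv2.undistort(rgb, camera_matrix, dist_coeffs)
```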
Version History
| SDK | Changes |
|---|---|
| 2.10.0 | Monochrome Capture requires modifications to the intrinsics before using them. |