Camera Intrinsics

The Zivid camera model is more complex and uses more intrinsic parameters than well-known pinhole camera models such as the OpenCV camera model. In addition, our cameras use a rolling calibration, which means that point clouds are generated as a function of aperture, temperature, and color channel.

The Zivid camera model is proprietary, which is why our internal camera intrinsics are not available in the SDK. However, we provide approximations for the OpenCV and Halcon camera models, since many machine vision algorithms rely on them.

Caution

In general, we discourage using camera intrinsics. One reason is that valuable information is lost in the approximation. Another is that there are often better methods to achieve the same result. Because of the complexity of this topic, we advise reaching out to us to discuss your use case. Intrinsics are complex; use the point cloud whenever possible!

OpenCV camera intrinsics

The Zivid SDK offers a few options for getting OpenCV camera intrinsics.

intrinsics(camera)

Returns intrinsics that correspond to the default value of Zivid::Settings::Sampling::Pixel for the camera.

intrinsics(camera, settings)

Returns intrinsics that correspond to the value of Zivid::Settings::Sampling::Pixel used in settings.

intrinsics(camera, settings_2d)

Returns intrinsics that correspond to a 2D capture with settings_2d.

estimateIntrinsics(frame)

Returns intrinsics that correspond to the value of Zivid::Settings::Sampling::Pixel that was used to capture the frame.
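
To make these entry points concrete, below is a minimal Python sketch (assuming zivid-python with its experimental calibration module, as used in the samples later in this article) that exercises each of them:

Python:

import zivid
import zivid.experimental.calibration

app = zivid.Application()
camera = app.connect_camera()

# Hard-coded intrinsics for the camera's default Sampling::Pixel value
default_intrinsics = zivid.experimental.calibration.intrinsics(camera)

# Hard-coded intrinsics matching the Sampling::Pixel value in a settings object
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])
settings_intrinsics = zivid.experimental.calibration.intrinsics(camera, settings)

# Hard-coded intrinsics matching a 2D capture with the given 2D settings
settings_2d = zivid.Settings2D(acquisitions=[zivid.Settings2D.Acquisition()])
intrinsics_2d = zivid.experimental.calibration.intrinsics(camera, settings_2d)

# Intrinsics estimated from a captured frame (uses the frame's Sampling::Pixel value)
frame = camera.capture(settings)
estimated_intrinsics = zivid.experimental.calibration.estimate_intrinsics(frame)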

The hard-coded camera intrinsics are given for a single aperture and a single temperature. This temperature and aperture correspond to what we consider typical conditions.

Camera Model    Lens Temperature (°C)    Aperture
Zivid 2+        35                       2.68
Zivid 2         35                       2.30

To get the hard-coded camera intrinsics, you first have to connect to the camera:

C++:

std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();
C#:

Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();
Python:

print("Connecting to camera")
camera = app.connect_camera()

Then, you can get the default OpenCV camera intrinsics (this function returns immediately):

C++:

std::cout << "Getting camera intrinsics" << std::endl;
const auto intrinsics = Zivid::Experimental::Calibration::intrinsics(camera);
C#:

Console.WriteLine("Getting camera intrinsics");
var intrinsics = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera);
Python:

print("Getting camera intrinsics")
intrinsics = zivid.experimental.calibration.intrinsics(camera)

Alternatively, you can get the OpenCV camera intrinsics corresponding to the value of Zivid::Settings::Sampling::Pixel used in settings:

C++:

const auto settingsSubsampled =
    Zivid::Settings{ Zivid::Settings::Engine::phase,
                     Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} },
                     Zivid::Settings::Sampling::Pixel::blueSubsample2x2 };
const auto fixedIntrinsicsForSubsampledSettings =
    Zivid::Experimental::Calibration::intrinsics(camera, settingsSubsampled);
C#:

var settingsSubsampled = new Zivid.NET.Settings();
settingsSubsampled.Acquisitions.Add(new Zivid.NET.Settings.Acquisition { });
settingsSubsampled.Sampling.Pixel = Zivid.NET.Settings.SamplingGroup.PixelOption.BlueSubsample2x2;
var fixedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settingsSubsampled);
Python:

settings_subsampled = zivid.Settings(
    acquisitions=[zivid.Settings.Acquisition()],
    sampling=zivid.Settings.Sampling(pixel=zivid.Settings.Sampling.Pixel.blueSubsample2x2),
)
fixed_intrinsics_for_subsampled_settings = zivid.experimental.calibration.intrinsics(camera, settings_subsampled)

Or, similarly for 2D settings:

C++:

const auto settings2D = Zivid::Settings2D{ Zivid::Settings2D::Acquisitions{ Zivid::Settings2D::Acquisition{} } };
const auto fixedIntrinsicsForSettings2D = Zivid::Experimental::Calibration::intrinsics(camera, settings2D);

C#:

var settings2D = new Zivid.NET.Settings2D
{
    Acquisitions = { new Zivid.NET.Settings2D.Acquisition { } }
};
var fixedIntrinsicsForSettings2D = Zivid.NET.Experimental.Calibration.Calibrator.Intrinsics(camera, settings2D);

Python:

settings_2d = zivid.Settings2D(acquisitions=[zivid.Settings2D.Acquisition()])
fixed_intrinsics_for_settings_2d = zivid.experimental.calibration.intrinsics(camera, settings_2d)

Caution

The hard-coded OpenCV intrinsics are fixed and do not adapt to the environment, unlike the calibration of our point clouds. Therefore, they will not correspond perfectly to a capture taken with a different aperture or at a different temperature.

The ambient temperature in production can vary, and the camera temperature will vary with it. Thermal Stabilization helps by regulating the internal temperature. Nevertheless, if the camera has not had time to warm up, the lens temperature at the start of deployment will likely be significantly lower than after a stabilization period. Due to this temperature difference, the hard-coded intrinsics will correspond less well with the point cloud data.

When it comes to the aperture, the situation becomes even more complicated when a point cloud is captured in HDR mode with a different aperture for each acquisition, because it is not trivial to determine the intrinsics for such a capture.

Because of these complexities, we provide an alternative method to get OpenCV camera intrinsics: the camera intrinsics estimated from the point cloud.

Using our calibration, a point cloud is generated as a function of temperature and aperture. Therefore, the estimated intrinsics indirectly account for both.

To get the estimated OpenCV intrinsics, you first have to connect to the camera:

C++:

std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();
C#:

Console.WriteLine("Connecting to camera");
var camera = zivid.ConnectCamera();
Python:

print("Connecting to camera")
camera = app.connect_camera()

Then you have to capture a frame:

C++:

const auto frame = camera.capture(settings);
C#:

var frame = camera.Capture(settings);
Python:

frame = camera.capture(settings)

Then, you can estimate the intrinsics from the frame (because computation is involved, this function does not return immediately):

C++:

const auto estimatedIntrinsics = Zivid::Experimental::Calibration::estimateIntrinsics(frame);
C#:

var estimatedIntrinsics = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frame);
Python:

estimated_intrinsics = zivid.experimental.calibration.estimate_intrinsics(frame)

Note that if you set Zivid::Settings::Sampling::Pixel to, e.g., Zivid::Settings::Sampling::Pixel::blueSubsample2x2, you get the correct (subsampled) intrinsics from the same function:

C++:

const auto settingsSubsampled =
    Zivid::Settings{ Zivid::Settings::Engine::phase,
                     Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} },
                     Zivid::Settings::Sampling::Pixel::blueSubsample2x2 };
const auto frame = camera.capture(settingsSubsampled);
const auto estimatedIntrinsicsForSubsampledSettings =
    Zivid::Experimental::Calibration::estimateIntrinsics(frame);
C#:

var settingsSubsampled = new Zivid.NET.Settings();
settingsSubsampled.Acquisitions.Add(new Zivid.NET.Settings.Acquisition { });
settingsSubsampled.Sampling.Pixel = Zivid.NET.Settings.SamplingGroup.PixelOption.BlueSubsample2x2;
var frameSubsampled = camera.Capture(settingsSubsampled);
var estimatedIntrinsicsForSubsampledSettings = Zivid.NET.Experimental.Calibration.Calibrator.EstimateIntrinsics(frameSubsampled);
Python:

settings_subsampled = zivid.Settings(
    acquisitions=[zivid.Settings.Acquisition()],
    sampling=zivid.Settings.Sampling(pixel=zivid.Settings.Sampling.Pixel.blueSubsample2x2),
)
frame = camera.capture(settings_subsampled)
estimated_intrinsics_for_subsampled_settings = zivid.experimental.calibration.estimate_intrinsics(frame)

Saving camera intrinsics

It is possible to save OpenCV camera intrinsics to a YML file:

C++:

const std::string outputFile = "Intrinsics.yml";
std::cout << "Saving camera intrinsics to file: " << outputFile << std::endl;
intrinsics.save(outputFile);
C#:

var intrinsicsFile = "Intrinsics.yml";
Console.WriteLine("Saving camera intrinsics to file: " + intrinsicsFile);
intrinsics.Save(intrinsicsFile);
Python:

output_file = "Intrinsics.yml"
print(f"Saving camera intrinsics to file: {output_file}")
intrinsics.save(output_file)
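
Once you have the intrinsics (saved or fetched directly), they can be plugged into OpenCV. Below is a minimal Python sketch, assuming the CameraIntrinsics fields exposed by zivid-python (a camera_matrix with fx, fy, cx, cy and a distortion with k1, k2, k3, p1, p2); note that OpenCV expects the distortion coefficients in the order (k1, k2, p1, p2, k3):

Python:

import numpy as np
import zivid
import zivid.experimental.calibration

app = zivid.Application()
camera = app.connect_camera()
intrinsics = zivid.experimental.calibration.intrinsics(camera)

# 3x3 camera matrix in the layout OpenCV expects
cm = intrinsics.camera_matrix
camera_matrix = np.array([[cm.fx, 0.0, cm.cx], [0.0, cm.fy, cm.cy], [0.0, 0.0, 1.0]])

# OpenCV distortion coefficient order: (k1, k2, p1, p2, k3)
d = intrinsics.distortion
dist_coeffs = np.array([d.k1, d.k2, d.p1, d.p2, d.k3])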

Halcon camera intrinsics

Halcon uses a camera model different from the Zivid and OpenCV camera models. There are two options for getting Halcon intrinsics, and both are based on an approximation of the OpenCV model.

Caution

If you need to use Halcon intrinsics (Camera internal parameters), use one of the approaches described below to get them. Do not calibrate our 2D camera in Halcon to get Halcon intrinsics; it does not work well with our camera.

Requirements:

  • Python installed and skills to run Python scripts

  • Zivid-Python installed

The simplest method to get Halcon intrinsics for your camera is to run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, focusing on the example for reading from a camera.

Note

This method is limited to getting the hard-coded camera intrinsics.

Requirements:

  • Python installed and skills to run Python scripts

  • Skills to build C++ or C# code samples

The other method to get Halcon intrinsics for your camera is to load the OpenCV camera intrinsics from the YML file and convert them to Halcon format.

To get OpenCV camera intrinsics and save them to a file, run one of the following samples:

  • C++: GetCameraIntrinsics.cpp. See Configure C++ Samples With CMake and Build Them in Visual Studio in Windows.

  • C#: GetCameraIntrinsics.cs. See Build C# Samples using Visual Studio.

  • Python: get_camera_intrinsics.py.

Then, run the convert_intrinsics_opencv_to_halcon.py code sample. Read the sample description, focusing on the example for reading from a file.

Note

This method allows getting both the hard-coded camera intrinsics and the camera intrinsics estimated from the point cloud. For the latter, however, it requires saving and loading a file for each capture you need intrinsics for.

Which camera intrinsics should I use?

Hard-coded camera intrinsics

We recommend using the hard-coded camera intrinsics only under the following conditions:

  • You get the color image from one of the following:

    • 2D capture

    • Single acquisition 3D capture

    • Multi-acquisition HDR 3D capture with Color Mode set to UseFirstAcquisition.

  • For the relevant acquisition (one of the three options above), you use the aperture value for which we provide the hard-coded intrinsics (see the table above).

  • The camera operates at temperatures close to room temperature.

  • You use only 2D data in your application, e.g., undistorting the image to detect straight lines.

Camera intrinsics estimated from point cloud

We recommend using the estimated intrinsics when:

  • You get the color image from a multi-acquisition HDR 3D capture with Color Mode set to Automatic or ToneMapping. This is because the point cloud is likely a result of acquisitions with different apertures and at a temperature different from the one corresponding to the hard-coded intrinsics.

  • You have any other use case where OpenCV intrinsics are necessary, for example:

    • projecting 3D data to a 2D image using projectPoints().

    • estimating 3D pose from a 2D image using solvePnP().

Note

The estimated camera intrinsics also work well for all cases where we recommend the hard-coded camera intrinsics. Therefore, for simplicity, you could always use the estimateIntrinsics function. However, estimating intrinsics from the point cloud takes time, whereas getting the hard-coded intrinsics from the camera is instantaneous.

In addition, the estimated intrinsics use the 3D capture settings, which means that they correspond to the 3D resolution. If you take a separate 2D capture with a resolution different from the 3D capture, you will get incorrect intrinsics. In that case, use the hard-coded method.

2D and 3D capture with different resolution

You require different intrinsics depending on the resolution of your 2D image. When you perform a Monochrome Capture, you may get different resolutions in 2D and 3D. If you take the 2D data from a 3D capture, estimateIntrinsics returns the intrinsics you want. If you get the 2D data from a separate 2D capture, you can call intrinsics(camera, settings_2d), where settings_2d are the settings used for the 2D capture.

Note

estimateIntrinsics provides the best intrinsics. Thus, even when you take a separate 2D capture, we recommend using these intrinsics. However, if the 2D resolution differs, you will have to modify the intrinsics accordingly; in that case, please contact us at customersuccess@zivid.com.

Instead of creating intrinsics that match the 2D resolution, you may downsample or subsample the 2D image to match the 3D resolution, as sketched below.
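
As a rough illustration, a 2x2 subsampling of the full-resolution 2D image can be done by keeping one pixel per 2x2 block. This is only a sketch: the (0, 0) row/column offsets below are an assumption, and the correct offsets depend on which Bayer position the subsampled point cloud uses (see the mapping article referenced in the note below):

Python:

import numpy as np

# Placeholder for a full-resolution 2D image of shape (height, width, 4)
rgba = np.zeros((1200, 1944, 4), dtype=np.uint8)

# Keep one pixel per 2x2 block so the 2D resolution matches a
# blueSubsample2x2 point cloud. The (0, 0) offsets are an assumption.
rgba_subsampled = rgba[0::2, 0::2, :]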

Hand-Eye Calibration

For hand-eye calibration, we generally recommend our own Hand-Eye Calibration, as it is best suited for Zivid cameras. Our hand-eye calibration method does not require camera intrinsics; however, many other methods do. If you use one of those methods, we recommend using the camera intrinsics estimated from the point cloud. Read more about choosing the correct Hand-Eye calibration method.

Projecting 3D points to a 2D image plane

When projecting Zivid point clouds onto a 2D image plane, using the estimated intrinsics will result in smaller re-projection errors than using the hard-coded intrinsics. However, keep in mind that using our correction filters, e.g., Gaussian Smoothing and the Contrast Distortion Filter, will result in higher re-projection errors. When using the estimated intrinsics on a point cloud captured without correction filters, expect less than 1 pixel of re-projection error. With correction filters, the re-projection errors will be larger for significantly corrected points; however, for most points, the re-projection error should still be below 1 pixel.
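
As a sketch of such a re-projection in practice (assuming the zivid-python API shown earlier and OpenCV's Python bindings), the valid points of a point cloud can be projected back onto the image plane with rvec = tvec = 0, since Zivid point clouds are already expressed in the camera frame:

Python:

import cv2
import numpy as np
import zivid
import zivid.experimental.calibration

app = zivid.Application()
camera = app.connect_camera()
frame = camera.capture(zivid.Settings(acquisitions=[zivid.Settings.Acquisition()]))
intrinsics = zivid.experimental.calibration.estimate_intrinsics(frame)

cm, d = intrinsics.camera_matrix, intrinsics.distortion
camera_matrix = np.array([[cm.fx, 0.0, cm.cx], [0.0, cm.fy, cm.cy], [0.0, 0.0, 1.0]])
dist_coeffs = np.array([d.k1, d.k2, d.p1, d.p2, d.k3])

# Flatten the point cloud and drop invalid (NaN) points
xyz = frame.point_cloud().copy_data("xyz").reshape(-1, 3)
xyz = xyz[~np.isnan(xyz[:, 0])]

# The points are already in the camera frame, so no rotation or translation
pixels, _ = cv2.projectPoints(xyz, np.zeros(3), np.zeros(3), camera_matrix, dist_coeffs)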

Note

Downsampled or Subsampled 2D

When you subsample the 2D data, you get a direct 1-to-1 mapping between 2D and 3D from Monochrome Capture.

Now consider downsampling a full-resolution 2D image. It is important to understand which pixels are used to generate each downsampled pixel. If you average pixels symmetrically around the wanted pixel, the averaged pixel should land on the pixel location that corresponds to the 3D data, which matches the intrinsics provided by the SDK best.

For more information, see Mapping between a full-resolution 2D image and a subsampled point cloud.

Estimating 3D pose from a 2D image

Let’s assume you are estimating the 3D pose of an object from a 2D image, e.g., using solvePnP() in OpenCV. Compared to the hard-coded camera intrinsics, the intrinsics estimated from the point cloud will provide better results. However, we do not recommend estimating a pose from a 2D image, as the pose can be estimated more accurately directly from the point cloud.
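
For reference, here is a hedged sketch of the solvePnP() approach; the 3D object points, their detected 2D image locations, and the intrinsics values are hypothetical placeholders (in practice, camera_matrix and dist_coeffs would be built from the estimated intrinsics as shown earlier):

Python:

import cv2
import numpy as np

# Hypothetical correspondences: 3D model points in the object frame (mm)
# and their detected 2D pixel locations in the color image.
object_points = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]], dtype=np.float64)
image_points = np.array([[520, 310], [780, 315], [775, 570], [515, 565]], dtype=np.float64)

# Hypothetical intrinsics; build these from the estimated intrinsics instead.
camera_matrix = np.array([[2750.0, 0.0, 972.0], [0.0, 2750.0, 600.0], [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Pose of the object frame relative to the camera frame
success, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)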

2D image processing algorithms

If you only use 2D data in your application, both the hard-coded intrinsics and those estimated from the point cloud will work well. An example is undistorting the 2D image to correct lens distortion so that straight lines can be reliably detected.
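
For instance, a minimal sketch of the undistortion step with OpenCV (the image here is a placeholder, and camera_matrix and dist_coeffs would be built from the intrinsics as shown earlier):

Python:

import cv2
import numpy as np

# Placeholder image; in practice, use the color image from the camera
bgr = np.zeros((1200, 1944, 3), dtype=np.uint8)

# Hypothetical intrinsics; build these from the Zivid intrinsics instead.
camera_matrix = np.array([[2750.0, 0.0, 972.0], [0.0, 2750.0, 600.0], [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # order: (k1, k2, p1, p2, k3)

undistorted = cv2.undistort(bgr, camera_matrix, dist_coeffs)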

Version History

SDK       Changes
2.10.0    Monochrome Capture requires modifications to the intrinsics before using them.