2D Image Projection (LED)
Introduction
The main components of a Zivid 3D camera are a 2D color camera and an LED projector. A 2D image pixel corresponds to a camera sensor pixel (the part of the sensor that collects photons). Similarly, a 2D projector image pixel corresponds to a projector pixel (the part of the projector that emits photons). This tutorial shows how to use the projector to project a color image onto the scene.
Creating a Projector Image
To create a projector image, you need to know the projector image resolution. This is easily done by connecting to the camera and retrieving the projector resolution:
std::cout << "Connecting to camera" << std::endl;
auto camera = zivid.connectCamera();
std::cout << "Retrieving the projector resolution that the camera supports" << std::endl;
const auto projectorResolution = Zivid::Projection::projectorResolution(camera);
Console.WriteLine("Connecting to camera");
using (var camera = zivid.ConnectCamera())
{
Console.WriteLine("Retrieving the projector resolution that the camera supports");
var projectorResolution = Zivid.NET.Projection.Projection.ProjectorResolution(camera);
print("Connecting to camera")
with app.connect_camera() as camera:
print("Retrieving the projector resolution that the camera supports")
projector_resolution = zivid.projection.projector_resolution(camera)
2D Projector Image
The next step is to create a Zivid::Image<Zivid::ColorBGRA>. The following shows how to create a Zivid image from scratch and how to convert an OpenCV image into a Zivid image.
You can create a Zivid image either by loading it from a file (for example, a PNG) or by creating it from scratch.
Here is an example of how to load a Zivid image. The limitation is that the image resolution must match the projector resolution of the Zivid camera.
| Camera | Resolution |
|---|---|
| Zivid 2+ | 1280 x 720 |
| Zivid 2 | 1000 x 720 |
std::string projectorImageFileForGivenCamera = getProjectorImageFileForGivenCamera(camera);
std::cout << "Reading 2D image (of resolution matching the Zivid camera projector resolution) from file: "
<< projectorImageFileForGivenCamera << std::endl;
const auto projectorImageForGivenCamera = Zivid::Image<Zivid::ColorBGRA>(projectorImageFileForGivenCamera);
string projectorImageFileForGivenCamera = GetProjectorImageFileForCamera(camera);
Console.WriteLine("Reading 2D image (of resolution matching the Zivid camera projector resolution) from file: " + projectorImageFileForGivenCamera);
var projectorImageForGivenCamera = new Zivid.NET.ImageBGRA(projectorImageFileForGivenCamera);
projector_image_file_for_given_camera = get_projector_image_file_for_camera(camera)
print(
f"Reading 2D image (of resolution matching the Zivid camera projector resolution) from file: {projector_image_file_for_given_camera}"
)
projector_image_for_given_camera = zivid.Image.load(projector_image_file_for_given_camera, "bgra")
Here is an example of how to create a Zivid Image where all pixels are red.
const auto redColor = Zivid::ColorBGRA(0, 0, 255, 255);
auto projectorImage = createProjectorImage(projectorResolution, redColor);
Zivid::Image<Zivid::ColorBGRA> createProjectorImage(
const Zivid::Resolution &projectorResolution,
const Zivid::ColorBGRA &ZividColor)
{
const std::vector<Zivid::ColorBGRA> imageData(projectorResolution.size(), ZividColor);
Zivid::Image<Zivid::ColorBGRA> projectorImage{ projectorResolution, imageData.begin(), imageData.end() };
return projectorImage;
}
var redColor = new Zivid.NET.ColorBGRA { b = 0, g = 0, r = 255, a = 255 };
var projectorImage = CreateProjectorImage(projectorResolution, redColor);
static Zivid.NET.ImageBGRA CreateProjectorImage(Zivid.NET.Resolution resolution, Zivid.NET.ColorBGRA color)
{
var pixelArray = new Zivid.NET.ColorBGRA[resolution.Height, resolution.Width];
for (ulong y = 0; y < resolution.Height; y++)
{
for (ulong x = 0; x < resolution.Width; x++)
{
pixelArray[y, x] = color;
}
}
var projectorImage = new Zivid.NET.ImageBGRA(pixelArray);
return projectorImage;
}
red_color = (0, 0, 255, 255)
projector_image = create_projector_image(projector_resolution, red_color)
def create_projector_image(resolution: Tuple, color: Tuple) -> np.ndarray:
"""Create projector image (numpy array) of given color.
Args:
resolution: projector resolution
color: bgra
Returns:
An image (numpy array) of color given by the bgra value
"""
projector_image = np.full((resolution[0], resolution[1], len(color)), color, dtype=np.uint8)
return projector_image
You can obtain an OpenCV image either by loading it from a file (for example, a PNG) or by creating it from scratch.
This example loads an image with OpenCV and then converts it into a Zivid image. The benefit of using OpenCV is that an image of arbitrary resolution can easily be resized to fit the projector resolution of the Zivid camera.
std::string imageFile = std::string(ZIVID_SAMPLE_DATA_DIR) + "/ZividLogo.png";
std::cout << "Reading 2D image (of arbitrary resolution) from file: " << imageFile << std::endl;
const auto inputImage = cv::imread(imageFile, cv::IMREAD_UNCHANGED);
Zivid::Image<Zivid::ColorBGRA> resizeAndCreateProjectorImage(
const cv::Mat &inputImage,
const Zivid::Resolution &projectorResolution)
{
cv::Mat projectorImageResized;
cv::Mat projectorImageBGRA;
cv::resize(
inputImage,
projectorImageResized,
cv::Size(projectorResolution.width(), projectorResolution.height()),
0,
0,
cv::INTER_LINEAR);
cv::cvtColor(projectorImageResized, projectorImageBGRA, cv::COLOR_BGR2BGRA);
std::cout << "Creating a Zivid::Image from the OpenCV image" << std::endl;
Zivid::Image<Zivid::ColorBGRA> projectorImage{ projectorResolution,
projectorImageBGRA.datastart,
projectorImageBGRA.dataend };
return projectorImage;
}
image_file = get_sample_data_path() / "ZividLogo.png"
print("Reading 2D image (of arbitrary resolution) from file: ")
input_image = cv2.imread(str(image_file))
if input_image is None:
raise RuntimeError(f"File {image_file} not found or couldn't be read.")
def _resize_and_create_projector_image(image_to_resize: np.ndarray, final_resolution: Tuple) -> np.ndarray:
"""Resizes an image to a given resolution.
Args:
image_to_resize: openCV image that needs to be resized
final_resolution: resolution after resizing
Returns:
An image with a resolution that matches the projector resolution
"""
resized_image = cv2.resize(
image_to_resize, (final_resolution[1], final_resolution[0]), interpolation=cv2.INTER_LINEAR
)
projector_image = cv2.cvtColor(resized_image, cv2.COLOR_BGR2BGRA)
return projector_image
In this example, a blank OpenCV image is created and then converted into a Zivid Image.
std::cout << "Creating a blank projector image with resolution: " << projectorResolution.toString()
<< std::endl;
const cv::Scalar backgroundColor{ 0, 0, 0, 255 };
auto projectorImageOpenCV = cv::Mat{ static_cast<int>(projectorResolution.height()),
static_cast<int>(projectorResolution.width()),
CV_8UC4,
backgroundColor };
std::cout << "Creating a Zivid::Image from the OpenCV image" << std::endl;
const Zivid::Image<Zivid::ColorBGRA> projectorImage{ projectorResolution,
projectorImageOpenCV.datastart,
projectorImageOpenCV.dataend };
print(f"Creating a blank projector image with resolution: {projector_resolution}")
background_color = (0, 0, 0, 255)
projector_image = np.full(
(projector_resolution[0], projector_resolution[1], len(background_color)), background_color, dtype=np.uint8
)
The image can now be projected. Note that this image was created without taking any 3D data into account.
2D Projector Image from a 3D Capture
Creating the projector image from 3D data is useful if you want to project something related to a 3D object onto the scene. For example, you may want to project onto a specific point, a surface, or any other 3D feature that can be detected from the point cloud. Achieving this requires knowing the correspondence between 3D points and projector pixels. If you do not need to create the 2D projector image from 3D data, you can skip directly to Start Projection.
In this example, we project a pattern of small green circles onto the checkerboard corners of a Zivid calibration board. The image below shows the expected end result.
Since the dimensions of the checkerboard are known, we can create a grid of points (7 x 6) with the correct spacing (30 mm) between the points that represent the checkerboard corners.
std::cout << "Creating a grid of 7 x 6 points (3D) with 30 mm spacing to match checkerboard corners"
<< std::endl;
const auto gridInCheckerboardFrame = checkerboardGrid();
std::vector<cv::Matx41f> checkerboardGrid()
{
std::vector<cv::Matx41f> points;
for(int x = 0; x < 7; x++)
{
for(int y = 0; y < 6; y++)
{
const float xPos = x * 30.0F;
const float yPos = y * 30.0F;
points.emplace_back(xPos, yPos, 0.0F, 1.0F);
}
}
return points;
}
print("Creating a grid of 7 x 6 points (3D) with 30 mm spacing to match checkerboard corners")
grid_points_in_checkerboard_frame = _checkerboard_grid()
def _checkerboard_grid() -> List[np.ndarray]:
"""Create a list of points corresponding to the checkerboard corners in a Zivid calibration board.
Returns:
points: List of 4D points (X,Y,Z,W) for each corner in the checkerboard, in the checkerboard frame
"""
x = np.arange(0, 7) * 30.0
y = np.arange(0, 6) * 30.0
xx, yy = np.meshgrid(x, y)
z = np.zeros_like(xx)
w = np.ones_like(xx)
points = np.dstack((xx, yy, z, w)).reshape(-1, 4)
return list(points)
At this point, a grid of 3D points has been created, but it has no connection to the real world yet. The grid therefore needs to be transformed to the pose of the Zivid calibration board relative to the camera. This is easily done by estimating the pose of the calibration board and transforming the grid with it:
std::cout << "Estimating checkerboard pose" << std::endl;
const auto cameraToCheckerboardTransform = detectionResult.pose().toMatrix();
std::cout << "Transforming the grid to the camera frame" << std::endl;
const auto pointsInCameraFrame =
transformGridToCameraFrame(gridInCheckerboardFrame, cameraToCheckerboardTransform);
std::vector<Zivid::PointXYZ> transformGridToCameraFrame(
const std::vector<cv::Matx41f> &grid,
const Zivid::Matrix4x4 &cameraToCheckerboardTransform)
{
std::vector<Zivid::PointXYZ> pointsInCameraFrame;
const auto transformationMatrix = cv::Matx44f{ cameraToCheckerboardTransform.data() };
for(const auto &point : grid)
{
const auto transformedPoint = transformationMatrix * point;
pointsInCameraFrame.emplace_back(transformedPoint(0, 0), transformedPoint(1, 0), transformedPoint(2, 0));
}
return pointsInCameraFrame;
}
print("Estimating checkerboard pose")
camera_to_checkerboard_transform = detection_result.pose().to_matrix()
print("Transforming the grid to the camera frame")
grid_points_in_camera_frame = _transform_grid_to_camera_frame(
grid_points_in_checkerboard_frame, camera_to_checkerboard_transform
)
def _transform_grid_to_camera_frame(
grid: List[np.ndarray], camera_to_checkerboard_transform: np.ndarray
) -> List[np.ndarray]:
"""Transform a list of grid points to the camera frame.
Args:
grid: List of 4D points (X,Y,Z,W) for each corner in the checkerboard, in the checkerboard frame
camera_to_checkerboard_transform: 4x4 transformation matrix
Returns:
List of 3D grid points in the camera frame
"""
points_in_camera_frame = []
for point_in_checkerboard_frame in grid:
point_in_camera_frame = camera_to_checkerboard_transform @ point_in_checkerboard_frame
points_in_camera_frame.append(point_in_camera_frame[:3])
return points_in_camera_frame
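In the snippets above, detection_result is assumed to already exist. A minimal sketch of how it could be obtained in Python, assuming the scene contains a Zivid calibration board and using zivid.calibration.detect_feature_points (the capture settings shown here are placeholders):
import zivid

app = zivid.Application()
camera = app.connect_camera()

# Capture the scene containing the Zivid calibration board
settings = zivid.Settings()
settings.acquisitions.append(zivid.Settings.Acquisition())
frame = camera.capture(settings)

# Detect the checkerboard feature points and validate the detection before using its pose
detection_result = zivid.calibration.detect_feature_points(frame.point_cloud())
if not detection_result.valid():
    raise RuntimeError("Calibration board not detected in the point cloud")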
The function Zivid::Projection::pixelsFrom3DPoints() converts 3D points in the camera frame to projector pixels using the internal calibration of the Zivid camera. Once the vector of transformed points is available, these 3D points are converted to projector pixels as follows:
std::cout << "Getting projector pixels (2D) corresponding to points (3D) in the camera frame" << std::endl;
const auto projectorPixels = Zivid::Projection::pixelsFrom3DPoints(camera, pointsInCameraFrame);
print("Getting projector pixels (2D) corresponding to points (3D) in the camera frame")
projector_pixels = zivid.projection.pixels_from_3d_points(camera, grid_points_in_camera_frame)
The next step is to create the projector image and draw green circles at the obtained projector pixel coordinates.
std::cout << "Creating a blank projector image with resolution: " << projectorResolution.toString()
<< std::endl;
const cv::Scalar backgroundColor{ 0, 0, 0, 255 };
auto projectorImageOpenCV = cv::Mat{ static_cast<int>(projectorResolution.height()),
static_cast<int>(projectorResolution.width()),
CV_8UC4,
backgroundColor };
std::cout << "Drawing circles on the projector image for each grid point" << std::endl;
const cv::Scalar circleColor{ 0, 255, 0, 255 };
drawFilledCircles(projectorImageOpenCV, projectorPixels, 2, circleColor);
std::cout << "Creating a Zivid::Image from the OpenCV image" << std::endl;
const Zivid::Image<Zivid::ColorBGRA> projectorImage{ projectorResolution,
projectorImageOpenCV.datastart,
projectorImageOpenCV.dataend };
print(f"Creating a blank projector image with resolution: {projector_resolution}")
background_color = (0, 0, 0, 255)
projector_image = np.full(
(projector_resolution[0], projector_resolution[1], len(background_color)), background_color, dtype=np.uint8
)
print("Drawing circles on the projector image for each grid point")
circle_color = (0, 255, 0, 255)
_draw_filled_circles(projector_image, projector_pixels, 2, circle_color)
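The helper _draw_filled_circles used above is not shown in the snippet. Below is a minimal sketch of what it could look like in Python, assuming each entry in projector_pixels is an (x, y) pair that may contain NaN for 3D points with no valid projector mapping:
from typing import List, Tuple

import cv2
import numpy as np


def _draw_filled_circles(
    image: np.ndarray, positions: List[Tuple[float, float]], circle_size_in_pixels: int, circle_color: Tuple
) -> None:
    """Draw a filled circle on the image at every position with finite coordinates."""
    for position in positions:
        if np.all(np.isfinite(position)):
            point = (round(position[0]), round(position[1]))
            cv2.circle(image, point, circle_size_in_pixels, circle_color, -1)  # thickness -1 fills the circle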
The projector image can be saved to disk for later use. The image can be saved in formats such as PNG, JPEG, and BMP.
const std::string projectorImageFile = "ProjectorImage.png";
std::cout << "Saving the projector image to file: " << projectorImageFile << std::endl;
projectorImage.save(projectorImageFile);
projector_image_file = "ProjectorImage.png"
print(f"Saving the projector image to file: {projector_image_file}")
cv2.imwrite(projector_image_file, projector_image)
Start Projection
The following shows how to project the image.
auto projectedImageHandle = Zivid::Projection::showImage(camera, projectorImage);
var projectedImageHandle = Zivid.NET.Projection.Projection.ShowImage(camera, projectorImage);
project_image_handle = zivid.projection.show_image_bgra(camera, projector_image)
Note
The image is projected continuously for as long as the image handle is kept alive.
Capturing and Saving a 2D Image While Projecting
The projector and the 2D camera can be controlled independently. This makes it possible to capture a 2D image of the scene while the projector is projecting, i.e., with the projected image on the scene.
{ // A Local Scope to handle the projected image lifetime
auto projectedImageHandle = Zivid::Projection::showImage(camera, projectorImage);
const Zivid::Settings2D settings2D{ Zivid::Settings2D::Acquisitions{ Zivid::Settings2D::Acquisition{
Zivid::Settings2D::Acquisition::Brightness{ 0.0 },
Zivid::Settings2D::Acquisition::ExposureTime{ std::chrono::microseconds{ 20000 } },
Zivid::Settings2D::Acquisition::Aperture{ 2.83 } } } };
std::cout << "Capturing a 2D image with the projected image" << std::endl;
const auto frame2D = projectedImageHandle.capture(settings2D);
const std::string capturedImageFile = "CapturedImage.png";
std::cout << "Saving the captured image: " << capturedImageFile << std::endl;
frame2D.imageBGRA().save(capturedImageFile);
std::cout << "Press enter to stop projecting..." << std::endl;
std::cin.get();
} // projectedImageHandle now goes out of scope, thereby stopping the projection
with zivid.projection.show_image_bgra(camera, projector_image) as projected_image:
settings_2d = zivid.Settings2D()
settings_2d.acquisitions.append(
zivid.Settings2D.Acquisition(brightness=0.0, exposure_time=timedelta(microseconds=20000), aperture=2.83)
)
print("Capturing a 2D image with the projected image")
frame_2d = projected_image.capture(settings_2d)
captured_image_file = "CapturedImage.png"
print(f"Saving the captured image: {captured_image_file}")
frame_2d.image_bgra().save(captured_image_file)
input("Press enter to stop projecting ...")
Stop Projection
The projection stops if the stop() function is called on the handle, if the handle goes out of scope, or if a 3D capture is started on the camera.
Stopping the projection via the projection handle
auto projectedImageHandle = Zivid::Projection::showImage(camera, projectorImage);
std::cout << "Press enter to stop projecting using the \".stop()\" function." << std::endl;
std::cin.get();
projectedImageHandle.stop();
var projectedImageHandle = Zivid.NET.Projection.Projection.ShowImage(camera, projectorImage);
Console.WriteLine("Press enter to stop projecting using the \".Stop()\" function");
Console.ReadLine();
projectedImageHandle.Stop();
project_image_handle = zivid.projection.show_image_bgra(camera, projector_image)
input('Press enter to stop projecting using the ".stop()" function')
project_image_handle.stop()
Stopping the projection by leaving a scope or a context manager
{
projectorImage = createProjectorImage(projectorResolution, greenColor);
projectedImageHandle = Zivid::Projection::showImage(camera, projectorImage);
std::cout << "Press enter to stop projecting by leaving a local scope" << std::endl;
std::cin.get();
}
projectorImage = CreateProjectorImage(projectorResolution, greenColor);
using (projectedImageHandle = Zivid.NET.Projection.Projection.ShowImage(camera, projectorImage))
{
Console.WriteLine("Press enter to stop projecting by leaving a local scope");
Console.ReadLine();
}
void projecting(Zivid::Camera &camera, const Zivid::Image<Zivid::ColorBGRA> &projectorImageFunctionScope)
{
auto projectedImageHandle = Zivid::Projection::showImage(camera, projectorImageFunctionScope);
std::cout << "Press enter to stop projecting by leaving a function scope" << std::endl;
std::cin.get();
}
projector_image = create_projector_image(projector_resolution, green_color)
with zivid.projection.show_image_bgra(camera, projector_image):
input("Press enter to stop projecting with context manager")
Stopping the projection by triggering a 3D capture
projectedImageHandle = Zivid::Projection::showImage(camera, projectorImage);
std::cout << "Press enter to stop projecting by performing a 3D capture" << std::endl;
std::cin.get();
const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition() } };
camera.capture(settings);
projectedImageHandle = Zivid.NET.Projection.Projection.ShowImage(camera, projectorImage);
Console.WriteLine("Press enter to stop projecting by performing a 3D capture");
Console.ReadLine();
var settings = new Zivid.NET.Settings
{
Acquisitions = { new Zivid.NET.Settings.Acquisition { } },
};
using (var frame3D = camera.Capture(settings)) { }
project_image_handle = zivid.projection.show_image_bgra(camera, projector_image)
input("Press enter to stop projecting by performing a 3D capture")
settings = zivid.Settings()
settings.acquisitions.append(zivid.Settings.Acquisition())
camera.capture(settings)
Projector Brightness
The Zivid firmware is designed to protect the lifetime of the camera by imposing limits on the light output (projector brightness).
If your goal is to maximize the brightness during projection, set the projector to use only one of its LED channels. You do this by setting each colored pixel in the projector image to exactly one of the pure color values: red (255,0,0), green (0,255,0), or blue (0,0,255). In the regions where no light should be projected, set the pixels to black (0,0,0).
Tip
When it comes to human perception, green is by far the best choice, since our eyes are more sensitive to green than to red and blue.
When projecting white light, or any other combination of red, green, and blue, the camera firmware automatically reduces the light output (projector brightness). This happens even if nearly all pixels are set to, for example, pure green (0,255,0), with only a single pixel deviating slightly from pure green or black, such as (0,255,1).
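As a minimal sketch of this, the snippet below builds a full-brightness single-channel (pure green) projector image in Python; the resolution values are placeholders, and in practice you would use the resolution returned by zivid.projection.projector_resolution(camera):
import numpy as np

# Placeholder projector resolution (height, width); query the camera for the real values
height, width = 720, 1280

# Pure green in BGRA order uses only the green LED channel,
# so the firmware does not reduce the light output
pure_green_bgra = (0, 255, 0, 255)
projector_image = np.full((height, width, len(pure_green_bgra)), pure_green_bgra, dtype=np.uint8)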
Code Samples
To get hands-on experience with the projection feature, check out the following code samples:
Version History
| SDK | Changes |
|---|---|
| 2.12 | The 2D Image Projection API is no longer marked as experimental. |
| 2.11.0 | Added support for C# and Python. |
| 2.10 | Added the experimental 2D Image Projection API. |