Performance Considerations

Reliability of exposure time

Note

This applies only to Zivid Two. Zivid One+ does not have this feature.

When capturing in HDR mode, Zivid Two performs the exposures of each acquisition back-to-back. The camera finishes capturing the scene when the projector finishes projecting the patterns. This allows you to reliably estimate when the capture of the scene is completed and when the scene can be changed, for example by moving the robot or moving objects in the scene.

For example, if you use a multi-acquisition HDR capture consisting of one acquisition with a \(10 ms\) exposure time and one with a \(5 ms\) exposure time, you can estimate the total exposure time with the formula below. This example assumes a 3D capture with the Phase engine, where each exposure consists of 13 pattern images.

\[((10 ms + 3 ms_{overhead}) * 13) + ((5ms + 3 ms_{overhead}) * 13) = 273 ms\]

Note

Changing the aperture between acquisitions adds an additional overhead in the millisecond range.

The robot can move after \(273 ms\). The number 13 is the number of patterns used by the Phase engine. Use the table below to determine the number of patterns per acquisition:

Engine    Patterns
Phase     13
Stripe    33
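The calculation above generalizes to any set of acquisitions and either engine. As a minimal sketch (plain C++, no Zivid SDK required; the function name and the 3 ms overhead figure are taken from the example above):

```cpp
#include <vector>

// Estimate the total exposure time (in ms) of a multi-acquisition HDR capture.
// patternCount is 13 for the Phase engine and 33 for the Stripe engine (see table).
// overheadMs is the per-pattern overhead, 3 ms as in the example above. Aperture
// changes between acquisitions add further overhead that is not modeled here.
double totalExposureTimeMs(const std::vector<double> &exposureTimesMs, int patternCount, double overheadMs = 3.0)
{
    double total = 0.0;
    for(const double exposureMs : exposureTimesMs)
    {
        total += (exposureMs + overheadMs) * patternCount;
    }
    return total;
}
```

For the example above, `totalExposureTimeMs({ 10.0, 5.0 }, 13)` yields 273 ms; with the Stripe engine the same two acquisitions would take \((10 + 3) * 33 + (5 + 3) * 33 = 693 ms\).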

Warning

Note that using Zivid Two at a high duty cycle will trigger thermal throttling to prevent the camera from overheating. For more information on this topic, check out the Projector Brightness page.

Which frame member functions return when?

After the capture, the point cloud processing continues to run in the background.

const auto frame = camera.capture(settings);

The following Zivid::Frame member functions do not depend on the point cloud processing being completed and therefore return immediately.

const auto frameSettings = frame.settings();
const auto cameraInfo = frame.cameraInfo();

On the other hand, the following APIs block and wait until the point cloud data is available.

const auto pointCloud = frame.pointCloud();
const auto frameInfo = frame.info();
const auto cameraState = frame.state();
frame.save("Frame.zdf");

Optimizing the robot cycle time

For pick-and-place applications where the camera is mounted on the robot, we recommend capturing right after the robot has picked an object. With this strategy, the robot only has to stand still during the camera acquisition. As soon as the acquisition is completed, that is, when the capture API returns, the robot can start placing the object. At the same time, the point cloud processing to obtain the pick pose for the next cycle can start in parallel. Check out our implementation example below.

Note

To optimize for speed, it is important to move the robot after the capture function has returned and before calling the API to get the point cloud.

#include <Zivid/Zivid.h>

#include <future>
#include <iostream>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Matrix4x4 computePickPose(const Zivid::Frame &frame)
    {
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();

        // This is where you should run your processing to get a pick pose from the point cloud
        return Zivid::Matrix4x4{};
    }

    void placeObject()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
        const auto capturePose = Zivid::Matrix4x4{};
        robot.Move(capturePose, std::chrono::milliseconds(2000));

        std::cout << "Capturing frame" << std::endl;
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
        const auto frame = camera.capture(settings);

        std::cout << "Processing to get the first pick pose" << std::endl;
        auto pickPose = computePickPose(frame);

        while(true)
        {
            std::cout << "Picking the new object" << std::endl;
            robot.Move(pickPose, std::chrono::milliseconds(2000));
            std::cout << "Object picked" << std::endl;

            std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
            robot.Move(capturePose, std::chrono::milliseconds(2000));

            // At this moment the robot should be at the capture pose

            std::cout << "Capturing frame" << std::endl;
            const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
            const auto frame = camera.capture(settings);
            std::cout << "Acquisition completed" << std::endl;

            std::cout
                << "Starting, at the same time (in two threads), robot motion to place the object and capturing and processing to get the new pick pose"
                << std::endl;
            auto futurePickPose = std::async(std::launch::async, computePickPose, frame);
            auto placingCompleted = std::async(std::launch::async, placeObject);
            pickPose = futurePickPose.get(); // Processing to get the new pick pose should be the faster of the two tasks, so that the robot does not have to stop and wait
            placingCompleted.get();
            std::cout
                << "Both robot motion to place the object and capturing and processing to get the new pick pose completed"
                << std::endl;
        }
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

Tip

If you perform a 2D capture instead of a 3D capture, the same strategy is recommended. However, if you need both 2D and 3D data, we recommend checking out our 2D+3D Capture Strategy as well as the performance limitations of back-to-back captures with the same camera (the last section of this article).

If the robot motion to place the object is faster than the point cloud processing to get the pick pose for the next cycle, the robot will have to wait.

The alternative that prevents the robot from stopping is to have the robot, in every cycle, go for the pick pose computed in the previous cycle. This strategy allows you to keep processing the data while the robot is picking an object. Check out our implementation example below. This strategy assumes that it is always possible to estimate multiple pick poses, and that the object to be picked in the current cycle has not moved during the previous pick cycle. To increase the chances of a successful pick, ensure the object for the current cycle is located in a different area of the bin than the object picked in the previous cycle.

#include <Zivid/Zivid.h>

#include <future>
#include <iostream>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Array2D<Zivid::PointXYZColorRGBA> getPointCloudInThread(const Zivid::Frame &frame)
    {
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();

        return data;
    }

    void placeObjectInThread()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
    }

    Zivid::Matrix4x4 computePickPoseInThread(const Zivid::Array2D<Zivid::PointXYZColorRGBA> &data)
    {
        // This is where you should run your processing to get a pick pose from the point cloud
        return Zivid::Matrix4x4{};
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
        const auto capturePose = Zivid::Matrix4x4{};
        robot.Move(capturePose, std::chrono::milliseconds(2000));

        // At this moment the robot should be at the capture pose

        std::cout << "Capturing frame" << std::endl;
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
        const auto frame = camera.capture(settings);
        std::cout << "Acquisition completed" << std::endl;

        std::cout
            << "Starting, at the same time (in two threads), robot motion to place the object and waiting for the point cloud to be processed and copied"
            << std::endl;
        auto futureData = std::async(std::launch::async, getPointCloudInThread, frame);
        auto placingCompleted = std::async(std::launch::async, placeObjectInThread);
        const auto data = futureData.get(); // The remaining point cloud processing should be the faster of the two tasks, so that the robot does not have to stop and wait
        placingCompleted.get();
        std::cout << "Both robot motion to place the object and point cloud processing and copying completed"
                  << std::endl;

        std::cout << "Starting the processing (in a separate thread) to get the pick pose for the next cycle"
                  << std::endl;
        auto nextPickPose = std::async(std::launch::async, computePickPoseInThread, data);

        std::cout
            << "Picking the object from the previous pick cycle (because computing the new pick pose is not completed yet at this moment)"
            << std::endl;
        const auto previousCyclePickPose = Zivid::Matrix4x4{};
        robot.Move(previousCyclePickPose, std::chrono::milliseconds(2000));
        std::cout << "Object picked" << std::endl;
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

For pick-and-place applications where the camera is stationary, we recommend capturing after the robot has picked the object and moved out of the camera's FOV (field of view). With this strategy, it may be possible to capture without stopping the robot, depending on which of the following takes longer to execute:

  • The total capture time plus the time to process the data to detect the object and estimate the pick pose.

  • The time for the robot to move from the pose outside the camera FOV to the place pose, and then back to the pose right before re-entering the camera FOV.

If the robot motion lasts longer, the robot will not have to stop and wait. However, if the former (capture + detection + pose estimation) takes longer to execute, the robot will have to stop. Check out our implementation example below.

#include <Zivid/Zivid.h>

#include <future>
#include <iostream>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Matrix4x4 captureAndProcess(Zivid::Camera &camera)
    {
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };

        const auto frame = camera.capture(settings);
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();

        const auto pickPose =
            Zivid::Matrix4x4{}; // This is where you should run your processing to get a pick pose from the point cloud

        return pickPose;
    }

    void placeObject()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOV = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOV, std::chrono::milliseconds(1000));
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        auto pickPose = captureAndProcess(camera);

        while(true)
        {
            std::cout << "Picking the new object" << std::endl;
            robot.Move(pickPose, std::chrono::milliseconds(2000));
            std::cout << "Object picked" << std::endl;

            std::cout << "Moving the robot with a picked object outside the FOV of the stationary camera" << std::endl;
            const auto outOfCameraFOV = Zivid::Matrix4x4{};
            robot.Move(outOfCameraFOV, std::chrono::milliseconds(2000));

            // At this moment the robot should be outside the camera FOV

            std::cout
                << "Starting, at the same time (in two threads), robot motion to place the object and capturing and processing to get the new pick pose"
                << std::endl;
            auto futurePickPose = std::async(std::launch::async, captureAndProcess, std::ref(camera));
            auto placingCompleted = std::async(std::launch::async, placeObject);
            pickPose = futurePickPose.get(); // Capturing and processing to get the new pick pose should be the faster of the two tasks, so that the robot does not have to stop and wait
            placingCompleted.get();
            std::cout
                << "Both robot motion to place the object and capturing and processing to get the new pick pose completed"
                << std::endl;
        }
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

The alternative that prevents the robot from stopping is to have the robot, in every cycle, go for the pick pose computed in the previous cycle. This strategy allows you to keep processing the data while the robot is picking an object. In this case, while placing the object, you can either just capture, or capture, get the point cloud, and copy it to CPU memory. Check out our implementation example below. This strategy assumes that it is always possible to estimate multiple pick poses, and that the object to be picked in the current cycle has not moved during the previous pick cycle. To increase the chances of a successful pick, ensure the object for the current cycle is located in a different area of the bin than the object picked in the previous cycle.

#include <Zivid/Zivid.h>

#include <future>
#include <iostream>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Frame captureInThread(Zivid::Camera &camera)
    {
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };

        const auto frame = camera.capture(settings);

        return frame;
    }

    void placeObjectInThread()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
    }

    Zivid::Matrix4x4 computePickPoseInThread(const Zivid::Frame &frame)
    {
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
        // If you know your robot motion to place the object is slow enough, you can move the above two lines to captureInThread

        // This is where you should run your processing to get a pick pose from the point cloud
        return Zivid::Matrix4x4{};
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        std::cout << "Moving the robot with a picked object outside the FOV of the stationary camera" << std::endl;
        const auto outOfCameraFOV = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOV, std::chrono::milliseconds(2000));

        // At this moment the robot should be outside the camera FOV

        std::cout
            << "Starting, at the same time (in two threads), robot motion to place the object and capturing the point cloud"
            << std::endl;
        auto futureFrame = std::async(std::launch::async, captureInThread, std::ref(camera));
        auto placingCompleted = std::async(std::launch::async, placeObjectInThread);
        const auto frame = futureFrame.get(); // Capturing the point cloud should be the faster of the two tasks, so that the robot does not have to stop and wait
        placingCompleted.get();
        std::cout << "Both robot motion to place the object and capturing the point cloud completed" << std::endl;

        std::cout << "Starting the processing (in a separate thread) to get the pick pose for the next cycle"
                  << std::endl;
        auto nextPickPose = std::async(std::launch::async, computePickPoseInThread, frame);

        std::cout
            << "Picking the object from the previous pick cycle (because computing the new pick pose is not completed yet at this moment)"
            << std::endl;
        const auto previousCyclePickPose = Zivid::Matrix4x4{};
        robot.Move(previousCyclePickPose, std::chrono::milliseconds(2000));
        std::cout << "Object picked" << std::endl;
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

Performance limitations of back-to-back captures with the same camera

The Zivid SDK performs all operations and function calls on a camera through a single dedicated thread. This means that a new capture (2D or 3D) cannot start until all processing of any ongoing capture on the same camera has finished. This affects the order of events when two captures are called back-to-back on the same camera.

Assume we make two consecutive 3D captures.

The first capture call returns as soon as its acquisition is completed. However, the second function call blocks and waits for the point cloud processing of the first capture to finish before triggering its acquisition. The second function call then still returns as soon as the acquisition of the second capture is completed.

const auto frame0 = camera.capture(settings);
const auto frame1 = camera.capture(settings);

The API to get the point cloud from the first Zivid::Frame returns immediately, because the point cloud processing of that frame has been completed by then. However, the same API on the second Zivid::Frame waits and blocks until the point cloud processing of the second capture is finished.

const auto pointCloud0 = frame0.pointCloud();
const auto pointCloud1 = frame1.pointCloud();

Assume we call a 3D capture followed by a 2D capture.

The 3D capture call returns as soon as its acquisition is completed. However, in this case the 2D capture call waits and blocks until the point cloud processing of the 3D capture is finished. Only then is the 2D acquisition triggered on the camera. The 2D capture function still returns as soon as the 2D acquisition is completed.

const auto frame = camera.capture(settings);
const auto frame2D = camera.capture(settings2D);

The API to get the point cloud from the Zivid::Frame returns immediately, because the point cloud processing of that frame has been completed by then. The method to get the image from the Zivid::Frame2D object returns once the 2D image is available in CPU memory; it blocks until the processing and copying are finished.

const auto pointCloud = frame.pointCloud();
const auto image = frame2D.imageRGBA();

Assume we call a 2D capture followed by a 3D capture.

The 2D capture call returns as soon as its acquisition is completed. However, the 3D capture call blocks and waits until the processing and copying of the 2D capture is finished and the 2D image is available in CPU memory, before triggering the 3D acquisition. The 3D capture function call then still returns as soon as the 3D acquisition is completed.

const auto frame2D = camera.capture(settings2D);
const auto frame = camera.capture(settings);

The API to get the image from the Zivid::Frame2D object returns immediately, because the processing and copying of the 2D capture have already been completed. However, the API on the 3D Zivid::Frame waits and blocks until the point cloud processing of the second capture is finished.

const auto image = frame2D.imageRGBA();
const auto pointCloud = frame.pointCloud();

Benefits of using Zivid functions for point cloud operations

Point cloud operations in the Zivid SDK, such as downsampling, transformation, and normals computation, are performed on the GPU while the point cloud is still in GPU memory. This avoids the extra time spent moving data back and forth between the GPU and CPU for the computation. Downsampling on the GPU with the Zivid API also reduces the amount of data that is copied to the CPU. Using the Zivid SDK for these operations is therefore beneficial for performance. Implementing these point cloud operations with a third-party library is likely to take more time: performing them on the CPU is typically much slower, and performing the same computations on the GPU with other software requires copying the data from the GPU to the CPU and back again.
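As a sketch of this workflow (assuming the `downsample`, `transform`, and `copyData` members of the Zivid C++ API; check the exact names and enum values against your SDK version), the operations can be chained on the GPU so that only the final result is copied to CPU memory:

```cpp
#include <Zivid/Zivid.h>

int main()
{
    Zivid::Application zivid;
    auto camera = zivid.connectCamera();

    const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
    const auto frame = camera.capture(settings);
    auto pointCloud = frame.pointCloud();

    // Both operations below run on the GPU, while the data is still in GPU memory
    pointCloud.downsample(Zivid::PointCloud::Downsampling::by2x2);
    pointCloud.transform(Zivid::Matrix4x4{}); // placeholder pose, as in the examples above

    // Normals are also computed on the GPU; downsampling first means less data
    // to compute normals for, and less data to copy to the CPU afterwards
    const auto normals = pointCloud.copyData<Zivid::NormalXYZ>();
    const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
}
```

Note that `downsample` and `transform` here modify the point cloud in place, so the subsequent `copyData` calls transfer only the reduced, transformed data.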