Performance Considerations

Exposure Time Reliability

When capturing in HDR mode, the Zivid camera performs the exposures of the individual acquisitions back-to-back. The capture of the scene is complete when the projector has finished projecting the patterns. This time is called the Acquisition Time, and it can be found in the information panel in Zivid Studio. Knowing this time lets you reliably estimate when the capture of the scene is complete and when the scene may be changed, for example by moving the robot or moving objects in the scene.

Estimating the Acquisition Time

If you prefer, you can estimate the acquisition time with pen and paper.

For example, if you use a multi-acquisition HDR capture consisting of a \(10 ms\) exposure and a \(5 ms\) exposure, you can calculate the acquisition time with the formula below. This assumes that you use the Phase engine for the 3D capture. The Phase engine projects 14 patterns per exposure (one of which is used to capture color data and can be disabled).

\[((10 ms + 3 ms_{overhead}) * 14) + ((5 ms + 3 ms_{overhead}) * 14) = 294 ms\]

Note

Changing the aperture between acquisitions adds further overhead in the millisecond range.

The robot may move after \(294 ms\). The number 14 is the number of patterns in the Phase engine. Use the tables below to determine the number of patterns per acquisition:

Number of projected patterns (cameras with the Omni engine):

Engine   Pixel::Color::rgb   Pixel::Color::disabled
Phase            14                    13
Stripe           27                    26
Omni             43                    42

Number of projected patterns (cameras without the Omni engine):

Engine   Pixel::Color::rgb   Pixel::Color::disabled
Phase            14                    13
Stripe           33                    32

The 3 ms overhead is an approximation that depends on the camera model, sampling mode, and ROI. We therefore recommend using Zivid Studio to determine the acquisition time for your specific settings.

Warning

Note that running a Zivid camera at a high duty cycle triggers thermal throttling to prevent the camera from overheating. For more on this topic, see the Projector Brightness page.

Which frame member functions return when?

After the capture returns, point cloud processing still runs in the background.

const auto frame = camera.capture(settings);

The following Zivid::Frame member functions do not depend on the point cloud processing having completed and therefore return immediately.

const auto frameSettings = frame.settings();
const auto cameraInfo = frame.cameraInfo();

The following APIs, on the other hand, block and wait until the point cloud data is available.

const auto pointCloud = frame.pointCloud();
const auto frameInfo = frame.info();
const auto cameraState = frame.state();
frame.save("Frame.zdf");

Optimizing the Robot Cycle Time

For pick-and-place applications with a robot-mounted camera, we recommend capturing after the robot has picked the object. With this strategy, the robot only has to stand still during the camera acquisition. As soon as the acquisition is complete, that is, when the capture API returns, the robot can start placing the object. At the same time, the point cloud processing to find the pick pose for the next cycle can start in parallel. See the implementation example below.

Note

To optimize for speed, it is important to move the robot after the capture function has returned and before calling the API that gets the point cloud.

#include <Zivid/Zivid.h>

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Matrix4x4 computePickPose(const Zivid::Frame &frame)
    {
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();

        // This is where you should run your processing to get a pick pose from the point cloud
        return Zivid::Matrix4x4{};
    }

    void placeObject()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
        const auto capturePose = Zivid::Matrix4x4{};
        robot.Move(capturePose, std::chrono::milliseconds(2000));

        std::cout << "Capturing frame" << std::endl;
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
        const auto frame = camera.capture(settings);

        std::cout << "Processing to get the first pick pose" << std::endl;
        auto pickPose = computePickPose(frame);

        while(true)
        {
            std::cout << "Picking the new object" << std::endl;
            robot.Move(pickPose, std::chrono::milliseconds(2000));
            std::cout << "Object picked" << std::endl;

            std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
            robot.Move(capturePose, std::chrono::milliseconds(2000));

            // At this moment the robot should be at the capture pose

            std::cout << "Capturing frame" << std::endl;
            const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
            const auto frame = camera.capture(settings);
            std::cout << "Acquisition completed" << std::endl;

            std::cout
                << "Starting, at the same time (in two threads), robot motion to place the object and capturing and processing to get the new pick pose"
                << std::endl;
            auto futurePickPose = std::async(std::launch::async, computePickPose, frame);
            auto placingCompleted = std::async(std::launch::async, placeObject);
            pickPose =
                futurePickPose
                    .get(); // This (processing to get the new pick pose) should be faster so the robot would not have to stop and wait
            placingCompleted.get();
            std::cout
                << "Both robot motion to place the object and capturing and processing to get the new pick pose completed"
                << std::endl;
        }
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

Tip

The same strategy is recommended if you perform 2D captures instead of 3D captures. However, if you need both 2D and 3D data, we recommend checking out our 2D+3D Capture Strategy.

If the robot motion to place the object is faster than the point cloud processing to get the pick pose for the next cycle, the robot will have to wait.

An alternative approach that prevents the robot from stopping is to have the robot, in each cycle, go for the pick pose computed in the previous cycle. This strategy allows you to keep processing data while the robot is picking the object. See the implementation example below. The strategy assumes that it is always possible to estimate multiple pick poses. It also assumes that the object to be picked in the current cycle did not move during the previous pick cycle. Make sure that the object in the current pick cycle lies in a different region of the bin than the object picked in the previous cycle. Such object selection increases the chance of a successful pick.

#include <Zivid/Zivid.h>

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Array2D<Zivid::PointXYZColorRGBA> getPointCloudInThread(const Zivid::Frame &frame)
    {
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();

        return data;
    }

    void placeObjectInThread()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
    }

    Zivid::Matrix4x4 computePickPoseInThread(const Zivid::Array2D<Zivid::PointXYZColorRGBA> &data)
    {
        // This is where you should run your processing to get a pick pose from the point cloud
        return Zivid::Matrix4x4{};
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
        const auto capturePose = Zivid::Matrix4x4{};
        robot.Move(capturePose, std::chrono::milliseconds(2000));

        // At this moment the robot should be at the capture pose

        std::cout << "Capturing frame" << std::endl;
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
        const auto frame = camera.capture(settings);
        std::cout << "Acquisition completed" << std::endl;

        std::cout
            << "Starting, at the same time (in two threads), robot motion to place the object and waiting for the point cloud to be processed and copied"
            << std::endl;
        auto futureData = std::async(std::launch::async, getPointCloudInThread, frame);
        auto placingCompleted = std::async(std::launch::async, placeObjectInThread);
        const auto data =
            futureData
                .get(); // This (remaining time to process the point cloud) should be faster so the robot would not have to stop and wait
        placingCompleted.get();
        std::cout << "Both robot motion to place the object and point cloud processing and copying completed"
                  << std::endl;

        std::cout << "Starting the processing (in a separate thread) to get the pick pose for the next cycle"
                  << std::endl;
        auto nextPickPose = std::async(std::launch::async, computePickPoseInThread, data);

        std::cout
            << "Picking the object from the previous pick cycle (because computing the new pick pose is not completed yet at this moment)"
            << std::endl;
        const auto previousCyclePickPose = Zivid::Matrix4x4{};
        robot.Move(previousCyclePickPose, std::chrono::milliseconds(2000));
        std::cout << "Object picked" << std::endl;
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

For pick-and-place applications with a stationary camera, we recommend capturing after the robot has picked the object and moved out of the camera's FOV. With this strategy, you may not need to stop the robot for the camera to capture, depending on which of the following takes longer:

  • The total capture time plus the time to process the data to detect the object and estimate the pick pose.

  • The robot motion from the pose outside the camera FOV to the place pose, and back to the pose just before re-entering the camera FOV.

If the robot motion lasts longer, the robot will not have to stop and wait. However, if the former (capture + detection + pose estimation) takes longer, the robot will have to stop. See the implementation example below.

#include <Zivid/Zivid.h>

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Matrix4x4 captureAndProcess(Zivid::Camera &camera)
    {
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };

        const auto frame = camera.capture(settings);
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();

        const auto pickPose =
            Zivid::Matrix4x4{}; // This is where you should run your processing to get a pick pose from the point cloud

        return pickPose;
    }

    void placeObject()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOV = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOV, std::chrono::milliseconds(1000));
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        auto pickPose = captureAndProcess(camera);

        while(true)
        {
            std::cout << "Picking the new object" << std::endl;
            robot.Move(pickPose, std::chrono::milliseconds(2000));
            std::cout << "Object picked" << std::endl;

            std::cout << "Moving the robot with a picked object outside the FOV of the stationary camera" << std::endl;
            const auto outOfCameraFOV = Zivid::Matrix4x4{};
            robot.Move(outOfCameraFOV, std::chrono::milliseconds(2000));

            // At this moment the robot should be outside the camera FOV

            std::cout
                << "Starting, at the same time (in two threads), robot motion to place the object and capturing and processing to get the new pick pose"
                << std::endl;
            auto futurePickPose = std::async(std::launch::async, captureAndProcess, std::ref(camera));
            auto placingCompleted = std::async(std::launch::async, placeObject);
            pickPose =
                futurePickPose
                    .get(); // This (capturing and processing to get the new pick pose) should be faster so the robot would not have to stop and wait
            placingCompleted.get();
            std::cout
                << "Both robot motion to place the object and capturing and processing to get the new pick pose completed"
                << std::endl;
        }
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

An alternative approach that prevents the robot from stopping is to have the robot, in each cycle, go for the pick pose computed in the previous cycle. This strategy allows you to keep processing data while the robot is picking the object. In this case, while the object is being placed, you can either only capture, or capture and also get the point cloud and copy it to CPU memory. See the implementation example below. The strategy assumes that it is always possible to estimate multiple pick poses. It also assumes that the object to be picked in the current cycle did not move during the previous pick cycle. Make sure that the object in the current pick cycle lies in a different region of the bin than the object picked in the previous cycle. Such object selection increases the chance of a successful pick.

#include <Zivid/Zivid.h>

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

namespace
{
    struct DummyRobot
    {
        void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
        {
            std::this_thread::sleep_for(duration);
        }
    };

    DummyRobot robot;

    Zivid::Frame captureInThread(Zivid::Camera &camera)
    {
        const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };

        const auto frame = camera.capture(settings);

        return frame;
    }

    void placeObjectInThread()
    {
        const auto placePose = Zivid::Matrix4x4{};
        robot.Move(placePose, std::chrono::milliseconds(2000));

        const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
    }

    Zivid::Matrix4x4 computePickPoseInThread(const Zivid::Frame &frame)
    {
        const auto pointCloud = frame.pointCloud();
        const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
        // If you know your robot motion to place the object is slow enough, you can move the above two lines to captureInThread

        // This is where you should run your processing to get a pick pose from the point cloud
        return Zivid::Matrix4x4{};
    }

} // namespace

int main()
{
    try
    {
        Zivid::Application zivid;
        auto camera = zivid.connectCamera();

        std::cout << "Moving the robot with a picked object outside the FOV of the stationary camera" << std::endl;
        const auto outOfCameraFOV = Zivid::Matrix4x4{};
        robot.Move(outOfCameraFOV, std::chrono::milliseconds(2000));

        // At this moment the robot should be outside the camera FOV

        std::cout
            << "Starting, at the same time (in two threads), robot motion to place the object and capturing the point cloud"
            << std::endl;
        auto futureFrame = std::async(std::launch::async, captureInThread, std::ref(camera));
        auto placingCompleted = std::async(std::launch::async, placeObjectInThread);
        const auto frame =
            futureFrame
                .get(); // This (capturing the point cloud) should be faster so the robot would not have to stop and wait
        placingCompleted.get();
        std::cout << "Both robot motion to place the object and capturing the point cloud completed" << std::endl;

        std::cout << "Starting the processing (in a separate thread) to get the pick pose for the next cycle"
                  << std::endl;
        auto nextPickPose = std::async(std::launch::async, computePickPoseInThread, frame);

        std::cout
            << "Picking the object from the previous pick cycle (because computing the new pick pose is not completed yet at this moment)"
            << std::endl;
        const auto previousCyclePickPose = Zivid::Matrix4x4{};
        robot.Move(previousCyclePickPose, std::chrono::milliseconds(2000));
        std::cout << "Object picked" << std::endl;
    }
    catch(const std::exception &e)
    {
        std::cerr << "Error: " << Zivid::toString(e) << std::endl;
        return EXIT_FAILURE;
    }
}

Advantages of Using Zivid Point Cloud Manipulation Functions

Point cloud operations in the Zivid SDK, such as downsampling, transformation, and normals computation, are performed on the GPU while the point cloud is still in GPU memory. This lets you avoid the extra time spent moving data back and forth between the GPU and the CPU for computation. Downsampling on the GPU via the Zivid API also reduces the amount of data copied to the CPU. For performance, it is therefore beneficial to use the Zivid SDK for these operations. Implementing these point cloud operations with third-party libraries is likely to take more time: running them on the CPU is typically much slower, and running the same computation on the GPU with other software would require copying the data from the GPU to the CPU and back again.
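As a sketch of this, the snippet below transforms and downsamples a point cloud on the GPU via the Zivid API before copying the reduced data to CPU memory. It assumes a connected camera and the Zivid SDK 2.x API; the identity-like placeholder transform and the 2x2 downsampling factor are arbitrary examples.

```cpp
#include <Zivid/Zivid.h>

#include <iostream>

int main()
{
    Zivid::Application zivid;
    auto camera = zivid.connectCamera();

    const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
    const auto frame = camera.capture(settings);

    auto pointCloud = frame.pointCloud();

    // Both operations below run on the GPU, while the data is still in GPU memory
    pointCloud.transform(Zivid::Matrix4x4{}); // placeholder pose, e.g. a camera-to-robot-base transform
    pointCloud.downsample(Zivid::PointCloud::Downsampling::by2x2);

    // Only the downsampled point cloud (1/4 of the points) is copied to the CPU
    const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
    std::cout << "Copied " << data.width() << " x " << data.height() << " points" << std::endl;
}
```

Doing the transform and downsampling in the opposite order, or with a third-party library after `copyData`, would move the full-resolution point cloud to the CPU first and lose the performance benefit described above.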

Version History

SDK      Changes
2.12.0   Zivid One+ has reached end-of-life and is no longer supported.
2.11.0   Zivid 2 and 2+ now support concurrent processing and acquisition, and switching between capture modes has been optimized.