Performance Considerations
Reliability in Exposure Time
Zivid cameras queue the exposures from each acquisition back to back when capturing in HDR mode. The camera is done capturing the scene when the projector has finished flashing the patterns. This time is called the Acquisition Time, and it can be found in the Information panel in Zivid Studio. Knowing this time lets you reliably estimate when capturing of the scene is done and the scene can be changed, for example by moving the robot or moving the objects in the scene.
Warning
Be aware that using the Zivid camera at a high duty cycle can trigger thermal throttling to prevent overheating. For more on this topic, see the projector brightness page.
Which frame member functions return when?
After the capture API returns, point cloud processing is still running in the background.
C++:
const auto frame = camera.capture(settings);
C#:
var frame = camera.Capture(settings);
Python:
frame = camera.capture(settings)
The following Zivid::Frame member functions do not depend on point cloud processing being completed and will therefore return right away.
C++:
const auto frameSettings = frame.settings();
const auto cameraInfo = frame.cameraInfo();
C#:
var frameSettings = frame.Settings;
var cameraInfo = frame.CameraInfo;
Python:
frame_settings = frame.settings
camera_info = frame.camera_info
On the other hand, the following APIs will block and wait until the point cloud data is available.
C++:
const auto pointCloud = frame.pointCloud();
const auto frameInfo = frame.info();
const auto cameraState = frame.state();
frame.save("Frame.zdf");
C#:
var pointCloud = frame.PointCloud;
var frameInfo = frame.Info;
var cameraState = frame.State;
frame.Save("Frame.zdf");
Python:
point_cloud = frame.point_cloud()
frame_info = frame.info
camera_state = frame.state
frame.save("Frame.zdf")
Optimizing Robot Cycle Times
For pick and place applications with a robot mounted camera, we recommend capturing after the robot has picked an object. With this strategy, the robot needs to stop only during the acquisition time. As soon as the acquisition is completed, which is when the capture API returns, the robot can start placing the object. In parallel, the point cloud processing to get the pick pose for the next cycle can start. Check out our implementation example below.
Note
To optimize for speed, it is important to move the robot after the capture function has returned and before calling the API to get the point cloud.
#include <Zivid/Zivid.h>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
namespace
{
struct DummyRobot
{
void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
{
std::this_thread::sleep_for(duration);
}
};
DummyRobot robot;
Zivid::Matrix4x4 computePickPose(const Zivid::Frame &frame)
{
const auto pointCloud = frame.pointCloud();
const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
// This is where you should run your processing to get a pick pose from the point cloud
return Zivid::Matrix4x4{};
}
void placeObject()
{
const auto placePose = Zivid::Matrix4x4{};
robot.Move(placePose, std::chrono::milliseconds(2000));
const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
}
} // namespace
int main()
{
try
{
Zivid::Application zivid;
auto camera = zivid.connectCamera();
std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
const auto capturePose = Zivid::Matrix4x4{};
robot.Move(capturePose, std::chrono::milliseconds(2000));
std::cout << "Capturing frame" << std::endl;
const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
const auto frame = camera.capture(settings);
std::cout << "Processing to get the first pick pose" << std::endl;
auto pickPose = computePickPose(frame);
while(true)
{
std::cout << "Picking the new object" << std::endl;
robot.Move(pickPose, std::chrono::milliseconds(2000));
std::cout << "Object picked" << std::endl;
std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
robot.Move(capturePose, std::chrono::milliseconds(2000));
// At this moment the robot should be at the capture pose
std::cout << "Capturing frame" << std::endl;
const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
const auto frame = camera.capture(settings);
std::cout << "Acquisition completed" << std::endl;
std::cout
<< "Starting, at the same time (in two threads), robot motion to place the object and capturing and processing to get the new pick pose"
<< std::endl;
auto futurePickPose = std::async(std::launch::async, computePickPose, frame);
auto placingCompleted = std::async(std::launch::async, placeObject);
pickPose =
futurePickPose
.get(); // This (processing to get the new pick pose) should be faster so the robot would not have to stop and wait
placingCompleted.get();
std::cout
<< "Both robot motion to place the object and capturing and processing to get the new pick pose completed"
<< std::endl;
}
}
catch(const std::exception &e)
{
std::cerr << "Error: " << Zivid::toString(e) << std::endl;
return EXIT_FAILURE;
}
}
Tip
The same strategy is recommended if you are performing a 2D capture instead of a 3D capture. However, if you need both 2D and 3D, then you should check out our 2D + 3D Capture Strategy.
If the robot motion to place the object is faster than the point cloud processing to get the pick pose for the next cycle, the robot will have to wait.
An alternative that prevents the robot from stopping is to have the robot, in every cycle, go for the pick pose computed in the previous cycle. This strategy allows you to keep processing the data while the robot is picking an object. Check out our implementation example below. The strategy assumes that it is always possible to estimate multiple pick poses, and that the object to be picked in the current cycle has not moved during the previous pick cycle. Ensure the object picked in the current cycle is located in a different area of the bin than the object picked in the previous cycle; this choice of object increases the chances of a successful pick.
#include <Zivid/Zivid.h>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
namespace
{
struct DummyRobot
{
void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
{
std::this_thread::sleep_for(duration);
}
};
DummyRobot robot;
Zivid::Array2D<Zivid::PointXYZColorRGBA> getPointCloudInThread(const Zivid::Frame &frame)
{
const auto pointCloud = frame.pointCloud();
const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
return data;
}
void placeObjectInThread()
{
const auto placePose = Zivid::Matrix4x4{};
robot.Move(placePose, std::chrono::milliseconds(2000));
const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
}
Zivid::Matrix4x4 computePickPoseInThread(const Zivid::Array2D<Zivid::PointXYZColorRGBA> &data)
{
// This is where you should run your processing to get a pick pose from the point cloud
return Zivid::Matrix4x4{};
}
} // namespace
int main()
{
try
{
Zivid::Application zivid;
auto camera = zivid.connectCamera();
std::cout << "Moving the robot mounted camera to the capture pose" << std::endl;
const auto capturePose = Zivid::Matrix4x4{};
robot.Move(capturePose, std::chrono::milliseconds(2000));
// At this moment the robot should be at the capture pose
std::cout << "Capturing frame" << std::endl;
const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
const auto frame = camera.capture(settings);
std::cout << "Acquisition completed" << std::endl;
std::cout
<< "Starting, at the same time (in two threads), robot motion to place the object and waiting for the point cloud to be processed and copied"
<< std::endl;
auto futureData = std::async(std::launch::async, getPointCloudInThread, frame);
auto placingCompleted = std::async(std::launch::async, placeObjectInThread);
const auto data =
futureData
.get(); // This (remaining time to process the point cloud) should be faster so the robot would not have to stop and wait
placingCompleted.get();
std::cout << "Both robot motion to place the object and point cloud processing and copying completed"
<< std::endl;
std::cout << "Starting the processing (in a separate thread) to get the pick pose for the next cycle"
<< std::endl;
auto nextPickPose = std::async(std::launch::async, computePickPoseInThread, data);
std::cout
<< "Picking the object from the previous pick cycle (because computing the new pick pose is not completed yet at this moment)"
<< std::endl;
const auto previousCyclePickPose = Zivid::Matrix4x4{};
robot.Move(previousCyclePickPose, std::chrono::milliseconds(2000));
std::cout << "Object picked" << std::endl;
}
catch(const std::exception &e)
{
std::cerr << "Error: " << Zivid::toString(e) << std::endl;
return EXIT_FAILURE;
}
}
For pick and place applications with a stationary mounted camera, we recommend capturing after the robot has picked an object and moved outside the FOV of the camera. With this strategy, there is a chance you do not need to stop the robot for the camera to capture, depending on which of the following operations takes longer to execute:
The total capture time together with the time to process the data to detect the object and estimate the pick pose.
Robot motion from the pose where it is outside the FOV of the camera, to the place pose, and back to the pose before it enters the FOV of the camera again.
If the robot motion lasts longer, the robot will not have to stop and wait. However, if the former (capture + detection + pose estimation) takes longer to execute, the robot will have to stop. Check out our implementation example below.
#include <Zivid/Zivid.h>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
namespace
{
struct DummyRobot
{
void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
{
std::this_thread::sleep_for(duration);
}
};
DummyRobot robot;
Zivid::Matrix4x4 captureAndProcess(Zivid::Camera &camera)
{
const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
const auto frame = camera.capture(settings);
const auto pointCloud = frame.pointCloud();
const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
const auto pickPose =
Zivid::Matrix4x4{}; // This is where you should run your processing to get a pick pose from the point cloud
return pickPose;
}
void placeObject()
{
const auto placePose = Zivid::Matrix4x4{};
robot.Move(placePose, std::chrono::milliseconds(2000));
const auto outOfCameraFOV = Zivid::Matrix4x4{};
robot.Move(outOfCameraFOV, std::chrono::milliseconds(1000));
}
} // namespace
int main()
{
try
{
Zivid::Application zivid;
auto camera = zivid.connectCamera();
auto pickPose = captureAndProcess(camera);
while(true)
{
std::cout << "Picking the new object" << std::endl;
robot.Move(pickPose, std::chrono::milliseconds(2000));
std::cout << "Object picked" << std::endl;
std::cout << "Moving the robot with a picked object outside the FOV of the stationary camera" << std::endl;
const auto outOfCameraFOV = Zivid::Matrix4x4{};
robot.Move(outOfCameraFOV, std::chrono::milliseconds(2000));
// At this moment the robot should be outside the camera FOV
std::cout
<< "Starting, at the same time (in two threads), robot motion to place the object and capturing and processing to get the new pick pose"
<< std::endl;
auto futurePickPose = std::async(std::launch::async, captureAndProcess, std::ref(camera));
auto placingCompleted = std::async(std::launch::async, placeObject);
pickPose =
futurePickPose
.get(); // This (capturing and processing to get the new pick pose) should be faster so the robot would not have to stop and wait
placingCompleted.get();
std::cout
<< "Both robot motion to place the object and capturing and processing to get the new pick pose completed"
<< std::endl;
}
}
catch(const std::exception &e)
{
std::cerr << "Error: " << Zivid::toString(e) << std::endl;
return EXIT_FAILURE;
}
}
An alternative that prevents the robot from stopping is to have the robot, in every cycle, go for the pick pose computed in the previous cycle. This strategy allows you to keep processing the data while the robot is picking an object. In this case, while placing the object, you can either just capture, or capture, get the point cloud, and copy it to CPU memory. Check out our implementation example below. The strategy assumes that it is always possible to estimate multiple pick poses, and that the object to be picked in the current cycle has not moved during the previous pick cycle. Ensure the object picked in the current cycle is located in a different area of the bin than the object picked in the previous cycle; this choice of object increases the chances of a successful pick.
#include <Zivid/Zivid.h>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
namespace
{
struct DummyRobot
{
void Move(Zivid::Matrix4x4 pose, std::chrono::milliseconds duration)
{
std::this_thread::sleep_for(duration);
}
};
DummyRobot robot;
Zivid::Frame captureInThread(Zivid::Camera &camera)
{
const auto settings = Zivid::Settings{ Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} } };
const auto frame = camera.capture(settings);
return frame;
}
void placeObjectInThread()
{
const auto placePose = Zivid::Matrix4x4{};
robot.Move(placePose, std::chrono::milliseconds(2000));
const auto outOfCameraFOVPose = Zivid::Matrix4x4{};
robot.Move(outOfCameraFOVPose, std::chrono::milliseconds(1000));
}
Zivid::Matrix4x4 computePickPoseInThread(const Zivid::Frame &frame)
{
const auto pointCloud = frame.pointCloud();
const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>();
// If you know your robot motion to place the object is slow enough, you can move the above two lines to captureInThread
// This is where you should run your processing to get a pick pose from the point cloud
return Zivid::Matrix4x4{};
}
} // namespace
int main()
{
try
{
Zivid::Application zivid;
auto camera = zivid.connectCamera();
std::cout << "Moving the robot with a picked object outside the FOV of the stationary camera" << std::endl;
const auto outOfCameraFOV = Zivid::Matrix4x4{};
robot.Move(outOfCameraFOV, std::chrono::milliseconds(2000));
// At this moment the robot should be outside the camera FOV
std::cout
<< "Starting, at the same time (in two threads), robot motion to place the object and capturing the point cloud"
<< std::endl;
auto futureFrame = std::async(std::launch::async, captureInThread, std::ref(camera));
auto placingCompleted = std::async(std::launch::async, placeObjectInThread);
const auto frame =
futureFrame
.get(); // This (capturing the point cloud) should be faster so the robot would not have to stop and wait
placingCompleted.get();
std::cout << "Both robot motion to place the object and capturing the point cloud completed" << std::endl;
std::cout << "Starting the processing (in a separate thread) to get the pick pose for the next cycle"
<< std::endl;
auto nextPickPose = std::async(std::launch::async, computePickPoseInThread, frame);
std::cout
<< "Picking the object from the previous pick cycle (because computing the new pick pose is not completed yet at this moment)"
<< std::endl;
const auto previousCyclePickPose = Zivid::Matrix4x4{};
robot.Move(previousCyclePickPose, std::chrono::milliseconds(2000));
std::cout << "Object picked" << std::endl;
}
catch(const std::exception &e)
{
std::cerr << "Error: " << Zivid::toString(e) << std::endl;
return EXIT_FAILURE;
}
}
Benefits of using Zivid Point Cloud Manipulation Functions
Point cloud operations in the Zivid SDK, e.g., Downsample, Transform, and Normals, are computed on the GPU while the point cloud is still in GPU memory. This avoids the extra time of moving the data back and forth between GPU and CPU for computation. Downsampling on the GPU with the Zivid API also leaves less data to copy to the CPU. For performance reasons, it is therefore beneficial to use the Zivid SDK for these operations. Implementing them with a third-party library is typically more time-consuming: performed on the CPU, the computations themselves are on average slower, and performing them on the GPU with other software requires copying the data from GPU to CPU and back again.
Version History
SDK | Changes
---|---
2.12.0 | Zivid One+ has reached its End-of-Life and is no longer supported.
2.11.0 | Zivid 2 and 2+ now support concurrent processing and acquisition, and switching between capture modes has been optimized.