Application Requirements

Bin picking finds application in various factory automation processes within the manufacturing industry. The workpieces in bin picking come in different sizes, materials, and surface characteristics. Object type and post-pick actions determine the required point cloud quality. Accurate picking and optimized motion planning without collisions are crucial for handling these objects. Moreover, the pick and place motion must be completed within the allocated cycle time. To delve deeper, we’ve categorized the application requirements into the following sections:

Point cloud quality

In semi-structured bin picking, objects can be stacked, singulated, or separated by layers. The next level of complexity is bulk objects, known as random bin picking, which generally requires higher-quality point clouds. Bin picking applications frequently involve strict picking accuracy requirements. The accuracy requirements are less demanding if the objective is merely to drop or place objects onto a conveyor belt, tray, or box. However, high picking accuracy becomes essential if the intention is to load the workpieces into, e.g., a CNC machine. Developing a successful bin-picking solution can thus necessitate placing objects accurately enough to enable the subsequent process, such as measurement, a stamping operation, or other item manipulation.

Cycle times

In bin picking, robot cycles generally range from 10 to 15 seconds, although some applications require cycles as short as 5 seconds. Time budgets for bin picking are generally not strict since the robot cell is typically not the bottleneck in the manufacturing line. The camera time budget therefore typically ranges from 700 ms to 1500 ms.
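
To sanity-check the time budget in practice, one can time the capture itself. Below is a minimal sketch, assuming the Zivid Python SDK (zivid-python), a connected camera, and default single-acquisition settings; exposure settings would normally be tuned per scene, which in turn affects the capture time.

```python
import time

import zivid

app = zivid.Application()
camera = app.connect_camera()

# Default single-acquisition settings; real applications tune these per scene.
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])

camera_time_budget = 1.5  # seconds, upper end of the typical 700-1500 ms range

start = time.monotonic()
with camera.capture(settings) as frame:
    # Copy XYZ data to the CPU so the timing includes data transfer.
    xyz = frame.point_cloud().copy_data("xyz")
elapsed = time.monotonic() - start

print(f"Capture and point cloud transfer took {elapsed:.2f} s")
if elapsed > camera_time_budget:
    print("Capture exceeds the camera time budget; consider faster settings")
```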

Detection and pose estimation algorithms

The primary objective in achieving successful object picking is accurately estimating a picking pose for the robot. The algorithms typically used rely predominantly on 3D data. 2D data is sometimes used as well, primarily for visualization and sanity checks by human operators, but also for object segmentation or recognition, typically of basic geometric shapes like cylinders. To effectively utilize 2D data, minimizing defocus, blooming, and saturation artifacts is important. To learn more about this topic, check out Optimizing Color Image and 2D + 3D Capture Strategy. In contrast, 3D data is leveraged for tasks such as object segmentation, detection, and pose estimation, often employing techniques like CAD model matching or geometry matching. Mitigating false and missing points becomes important when utilizing 3D data. Additionally, having distinct 3D edges in point clouds facilitates localization techniques such as 3D template matching.
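
As an illustration of the 3D side, the sketch below refines an object pose by registering a point cloud sampled from the object's CAD model against the captured scene. It is a minimal sketch assuming Open3D, hypothetical file names object_model.ply and scene.ply, and a reasonable initial guess; a complete pipeline would also include segmentation and a global matching step to provide that initial guess.

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: a point cloud sampled from the object's CAD model
# and a scene point cloud captured by the 3D camera.
model = o3d.io.read_point_cloud("object_model.ply")
scene = o3d.io.read_point_cloud("scene.ply")

voxel_size = 2.0  # assumed units: mm
model_down = model.voxel_down_sample(voxel_size)
scene_down = scene.voxel_down_sample(voxel_size)
for cloud in (model_down, scene_down):
    cloud.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30)
    )

# Refine an initial guess (here, identity for brevity) with point-to-plane ICP.
initial_guess = np.identity(4)
result = o3d.pipelines.registration.registration_icp(
    model_down,
    scene_down,
    voxel_size * 2,  # max correspondence distance
    initial_guess,
    o3d.pipelines.registration.TransformationEstimationPointToPlane(),
)
print("Estimated object pose (model -> scene):")
print(result.transformation)
```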

Object shape and surface

Let us focus on specific object types and scenes commonly encountered in bin picking. We have selected these scene categories because each presents unique challenges, and for each scene we have compiled a list of object features that are vital to preserve in the point cloud. The following sections outline these typical challenging scenes.

Objects and scenes with characteristic shapes and geometry

Thin and overlapping objects

Features:

  • Shape

  • Edges

  • Depth differences

  • Spatial resolution

Preserving 2D and 3D edges is crucial for thin and overlapping objects like sheet and plate metal. Sharp edges enable clear depth differentiation, making it easy to identify object boundaries. Sufficient spatial resolution plays a key role in having well-represented 3D edges in point clouds. Furthermore, maintaining the integrity of the 3D shape is vital for tasks such as fitting geometric primitives (e.g., a plane) to identify optimal pick surfaces from the 3D data. Algorithms commonly used in bin picking, like 3D template matching, rely on distinctive shape and edge characteristics for accurate recognition and localization, particularly in partially occluded scenes.
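
For example, fitting a plane to find a candidate pick surface can be sketched as follows, assuming Open3D, a hypothetical scene.ply file, and point clouds in millimeters; the thresholds are illustrative and would be tuned to the camera and parts.

```python
import open3d as o3d

# Hypothetical scene point cloud of thin, overlapping sheet-metal parts.
scene = o3d.io.read_point_cloud("scene.ply")

# Fit the dominant plane with RANSAC; its inliers form a candidate pick surface.
plane_model, inlier_indices = scene.segment_plane(
    distance_threshold=1.0,  # mm, tolerance for points considered on the plane
    ransac_n=3,
    num_iterations=1000,
)
a, b, c, d = plane_model
print(f"Candidate pick plane: {a:.3f}x + {b:.3f}y + {c:.3f}z + {d:.3f} = 0")

pick_surface = scene.select_by_index(inlier_indices)
print(f"Plane contains {len(pick_surface.points)} points")
```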

Cylindrical objects

Features:

  • Shape

  • Surface coverage

Good surface coverage and an accurate representation of cylindricity are essential for successful detection and pose estimation of cylinders. Correct cylindricity, characterized by the correct radius, ensures an accurate determination of picking poses. Good surface coverage around the circumference, i.e., in the directions away from the main axis, makes the point cloud resemble a cylinder rather than a flat surface, which improves pose estimation performance. Preserved surface coverage is therefore especially important for thin cylinders.
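
The sketch below illustrates, with plain NumPy and synthetic data, a simple cylinder estimate: the axis from PCA and the radius from the mean radial distance. It is an illustrative approximation, not a robust RANSAC cylinder fit, and all values are assumptions.

```python
import numpy as np

def estimate_cylinder(points):
    """Rough cylinder estimate: PCA for the axis, mean radial distance for the radius.

    points: (N, 3) array segmented from a single cylindrical object.
    Illustrative approximation only, not a robust RANSAC cylinder fit.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The direction of largest variance approximates the cylinder's main axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Distance of each point from the axis line through the centroid.
    along_axis = centered @ axis
    radial = centered - np.outer(along_axis, axis)
    return centroid, axis, np.linalg.norm(radial, axis=1).mean()

# Synthetic cylinder: radius 10, length 100, full circumferential coverage.
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 2000)
heights = rng.uniform(0.0, 100.0, 2000)
points = np.column_stack([10.0 * np.cos(angles), 10.0 * np.sin(angles), heights])

centroid, axis, radius = estimate_cylinder(points)
print(f"Axis: {np.round(axis, 2)}, radius: {radius:.2f}")
# With only partial surface coverage (e.g., points limited to a narrow arc),
# this naive estimate degrades noticeably, illustrating why coverage matters.
```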

Tiny objects and objects with fine details / small features

Features:

  • Shape

  • Edges

  • Depth differences

  • Spatial resolution

Many object detection and pose estimation algorithms rely on distinctive shape and edge characteristics. Spatial resolution and depth differences are critical for properly representing fine details in the point cloud. Sharp edges and well-preserved shapes are therefore essential when dealing with small objects and objects with intricate features. Objects such as shafts and axles with non-symmetrical features often require these minute details to be resolved in order to determine their orientation.
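
A rough way to reason about this is to compare the camera's point-to-point spacing with the smallest feature that must be resolved. The sketch below is a back-of-the-envelope check with purely illustrative numbers; the "at least 4-5 points across the feature" rule of thumb is an assumption, not a specification.

```python
def points_across_feature(feature_size_mm, point_spacing_mm):
    """Approximate number of point cloud samples across an object feature.

    feature_size_mm: smallest detail that must be resolved (e.g., a keyway on a shaft).
    point_spacing_mm: camera point-to-point distance at the working distance.
    """
    return feature_size_mm / point_spacing_mm

# Illustrative numbers only: a 2 mm feature imaged with 0.4 mm point spacing.
samples = points_across_feature(2.0, 0.4)
print(f"~{samples:.0f} points across the feature")
# Assumed rule of thumb: aim for at least 4-5 points across the smallest feature
# that the detection algorithm must distinguish.
if samples < 5:
    print("Consider higher spatial resolution (different camera or shorter working distance)")
```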

Objects and scenes with surfaces facing each other

Features:

  • Shape

  • Surface coverage

Objects and scenes with surfaces facing each other present a unique challenge because they are susceptible to reflection artifacts from light bouncing between the surfaces. This situation arises, for example, with L-profiles that have opposing surfaces, or when two flat objects in a bin are positioned with their surfaces facing each other. Flat parts facing the bin wall can also pose imaging difficulties. To overcome these challenges and achieve successful localization and pose estimation, it is crucial to preserve the 3D shape of the objects while ensuring sufficient surface coverage.

Specular and dark objects

Features:

  • Shape

  • Surface coverage

There are no additional point cloud quality requirements for detecting specular objects compared to diffuse objects. However, due to their mirror-like reflection, specular scenes can be challenging and can produce various reflection artifacts. Dark objects, on the other hand, require more light to be visible to the camera. Finally, objects that are both dark and reflective pose extra imaging challenges. For these scenes, it is therefore especially important to preserve the 3D shape of the objects with surface coverage that is as continuous as possible.
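
In practice, this often means capturing with multiple acquisitions (HDR) so that both dark surfaces and bright reflections are covered. The sketch below assumes the Zivid Python SDK and uses purely illustrative aperture and exposure values; real settings would be tuned per scene, for example with Zivid Studio.

```python
import datetime

import zivid

app = zivid.Application()
camera = app.connect_camera()

# Illustrative acquisitions only: short, medium, and long exposures combined
# into a single HDR capture to cover bright reflections and dark surfaces.
acquisitions = [
    zivid.Settings.Acquisition(
        aperture=5.66,
        exposure_time=datetime.timedelta(microseconds=exposure_us),
    )
    for exposure_us in (1677, 10000, 40000)
]
settings = zivid.Settings(acquisitions=acquisitions)

with camera.capture(settings) as frame:
    frame.save("dark_and_specular_scene.zdf")
```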

Gripper compliance

The quality of the point cloud is often a determining factor for the type of gripper employed. For example, if the point cloud data is highly accurate, a mechanical gripper with narrow tolerances can be utilized. Otherwise, a suction cup, which has more compliance, might be necessary. When the application allows for it, gripper compliance is introduced in bin picking to increase the success rate of a pick, because compliance reduces the chance of not reaching the object, or of crashing into it and damaging the object or the gripper. However, grippers in bin picking applications often don't have much compliance, especially when accurate picking and placing is required. Dimension trueness, point precision, and planarity are some of the factors determining how much compliance is needed in the gripper. Therefore, cameras for bin picking are often required to be accurate.
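
A rough way to relate these factors is a worst-case tolerance budget: the gripper must tolerate at least the sum of the relevant error sources. The sketch below is an illustrative, assumed simplification (a linear worst-case sum with made-up numbers), not a formal tolerance analysis.

```python
def required_gripper_compliance(camera_trueness_mm, camera_precision_mm,
                                robot_accuracy_mm, part_tolerance_mm):
    """Worst-case positional error the gripper must tolerate (simple linear sum)."""
    return camera_trueness_mm + camera_precision_mm + robot_accuracy_mm + part_tolerance_mm

# Illustrative numbers only.
error = required_gripper_compliance(
    camera_trueness_mm=0.2,
    camera_precision_mm=0.1,
    robot_accuracy_mm=0.1,
    part_tolerance_mm=0.1,
)
print(f"Gripper should tolerate at least ~{error:.1f} mm of positional error")
```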

Motion planning and collision avoidance

An additional element to consider in bin picking is motion planning and collision avoidance. Motion planning is used to optimize the robot's trajectories while picking, thus saving cycle time. It is often paired with collision avoidance to avoid crashing into obstacles such as bin walls, objects not currently being picked, and other environmental restrictions. The obstacles seen by the vision system are then avoided by the robot. Ideally, the vision system's representation of the environment would overlap exactly with the real environment. However, artifacts can arise. These artifacts are false or missing data that do not align with the real world. False data appear, for instance, as ghost planes or floating blobs that do not exist in reality, whereas missing data appear as holes in the point cloud. The latter is a result of incomplete surface coverage and comprises data that should have existed in the point cloud. Due to artifacts, collision avoidance may prevent the robot from reaching its destination. Hence, motion planning needs to define which obstacles are safe to disregard and which are not. With higher-quality 3D data from the camera, i.e., clean point clouds, the complexity of gripper compliance and of motion planning with collision avoidance can be reduced.
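
As a simple illustration, small floating blobs (false data) can often be filtered out before the point cloud is handed to collision checking. The sketch below assumes Open3D and a hypothetical scene.ply; the filter parameters are illustrative and would be tuned to the camera and scene.

```python
import open3d as o3d

# Hypothetical captured scene used to build the collision environment.
scene = o3d.io.read_point_cloud("scene.ply")

# Floating blobs have few neighbors and can be removed with a statistical
# outlier filter before the cloud is passed to collision checking.
filtered, kept_indices = scene.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
removed = len(scene.points) - len(kept_indices)
print(f"Removed {removed} likely artifact points from the collision environment")

# Missing data (holes) cannot be filtered away; the motion planner must handle
# it, e.g., by treating unobserved space conservatively.
```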

This section has reviewed the application requirements for bin picking. The next step is to select the correct Zivid camera based on your scene volume.