Traditionally, robotic systems have been designed to grasp and move objects one at a time, a method that is simple to implement but far less efficient than human workers, who intuitively grasp and transfer multiple objects simultaneously. Most existing systems also rely heavily on vision-based perception, which can be hampered by occlusion, variable object poses, and overlapping items, leading to frequent errors in object detection and grasp planning. Additionally, single-object grasping strategies do not scale easily to multi-object scenarios: they often require complex reprogramming and lack the adaptability needed for diverse object types and unpredictable environments.
Our researchers have developed an advanced robotic system for efficiently grasping and transferring multiple objects at once, closely emulating human-like dexterity in tasks such as bin-picking and sorting. The system integrates a multi-fingered robotic hand equipped with tactile and torque sensors, mounted on a robotic arm and coordinated by sophisticated control circuitry. What differentiates this technology is its integration of stochastic sampling, deep learning, and reinforcement learning within a unified robotic manipulation framework. This approach not only increases operational efficiency but also provides a scalable solution adaptable to industrial and logistics applications where bulk object manipulation is required.
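To make the pipeline concrete, the sketch below illustrates one plausible way the pieces described above could fit together: stochastically sampling grasp poses over a bin, scoring each with a model to pick the pose predicted to capture a target number of objects, then using a torque reading to estimate how many objects are actually in hand. All names, parameters, and the scoring heuristic here are hypothetical stand-ins (a real system would use trained networks and calibrated sensor models), not the authors' actual implementation.

```python
import random

def sample_grasp_candidates(n_candidates, bin_width=0.3, bin_depth=0.2):
    """Stochastic sampling: draw random hand poses (x, y, finger spread)
    over the bin area. Dimensions are illustrative, in meters."""
    return [
        (random.uniform(0.0, bin_width),   # hand x position
         random.uniform(0.0, bin_depth),   # hand y position
         random.uniform(0.04, 0.10))       # finger spread
        for _ in range(n_candidates)
    ]

def predict_object_count(pose):
    """Stand-in for a learned model: here, wider finger spreads are
    assumed to capture more objects (~1 object per 2 cm of spread).
    A real system would train this predictor on tactile/vision data."""
    _x, _y, spread = pose
    return spread / 0.02

def select_grasp(candidates, target_count):
    """Pick the sampled pose whose predicted object count is closest
    to the desired number of objects per transfer."""
    return min(candidates,
               key=lambda p: abs(predict_object_count(p) - target_count))

def estimate_grasped_count(total_torque, per_object_torque=0.15):
    """Infer how many objects are in hand from total fingertip torque
    (N*m), assuming a roughly constant per-object contribution."""
    return round(total_torque / per_object_torque)

if __name__ == "__main__":
    random.seed(0)
    candidates = sample_grasp_candidates(64)
    best_pose = select_grasp(candidates, target_count=3)
    print("chosen spread (m):", round(best_pose[2], 3))
    print("objects in hand:", estimate_grasped_count(0.47))
```

In a full system, the sampling-plus-scoring loop would be refined by reinforcement learning, with tactile and torque feedback after each grasp serving as the reward signal for how many objects were actually captured.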
An example of multi-object grasping and transferring: 10 tomatoes moved from the blue bin to the yellow bin.