Robots remain limited in human-like manipulation tasks such as grasping, assembling, and handling objects because they lack a structured, adaptable motion framework. Existing methods rely on task-specific motion models that neither generalize across tasks nor adapt to unfamiliar objects, restricting automation in dynamic sectors such as manufacturing, healthcare, and service robotics.
The proposed Motion Taxonomy for Manipulation Embedding and Recognition introduces a hierarchical, reusable classification of manipulation motions that can be embedded directly into learning frameworks. Unlike conventional approaches that model motions as isolated, rigid actions, the taxonomy captures motion primitives and their compositional structure, allowing robots to recognize, adapt, and reuse learned motions across a wide range of tasks and environments. The technology has been validated in practice: motion capture systems recorded human manipulations, robotic arms reproduced the motions, and experiments used real-world objects such as blocks, tools, and household items. Results demonstrated improved motion recognition accuracy, rapid learning of unseen tasks, and adaptability under changing conditions. By enabling motion transferability, reusability, and generalization, this innovation reduces programming effort and supports autonomous, scalable, and intelligent robotic systems beyond the capabilities of existing solutions.
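As a rough illustration of how a taxonomy-based embedding can support recognition and reuse, the Python sketch below encodes each manipulation as a short vector of binary attributes and classifies an observed motion by nearest neighbor in that space. This is a minimal sketch under stated assumptions: the attribute names (contact, engagement, prismatic, revolute, recurrence), their codes, the motion labels, and the helpers MotionDescriptor and recognize are all illustrative, not the published encoding.

```python
from dataclasses import dataclass

# Illustrative taxonomy attributes. The actual taxonomy uses its own set of
# motion descriptors; these names and binary codes are assumptions for this
# sketch, chosen only to show the embed-and-match idea.

@dataclass(frozen=True)
class MotionDescriptor:
    contact: int      # 0 = non-contact, 1 = contact with the object
    engagement: int   # 0 = rigid interaction, 1 = soft/deformable
    prismatic: int    # 1 if the trajectory has a translational component
    revolute: int     # 1 if the trajectory has a rotational component
    recurrence: int   # 0 = one-shot motion, 1 = cyclical/repetitive

    def embedding(self) -> tuple:
        """Binary embedding used to compare manipulations."""
        return (self.contact, self.engagement,
                self.prismatic, self.revolute, self.recurrence)

def hamming(a: MotionDescriptor, b: MotionDescriptor) -> int:
    """Distance between two motions in taxonomy space."""
    return sum(x != y for x, y in zip(a.embedding(), b.embedding()))

# A tiny library of previously learned motion primitives (labels illustrative).
LIBRARY = {
    "pour": MotionDescriptor(contact=0, engagement=0, prismatic=0, revolute=1, recurrence=0),
    "stir": MotionDescriptor(contact=1, engagement=1, prismatic=0, revolute=1, recurrence=1),
    "push": MotionDescriptor(contact=1, engagement=0, prismatic=1, revolute=0, recurrence=0),
}

def recognize(observed: MotionDescriptor) -> str:
    """Nearest-neighbor recognition against the learned primitives."""
    return min(LIBRARY, key=lambda name: hamming(LIBRARY[name], observed))

if __name__ == "__main__":
    # A motion extracted from capture data: contact, rigid, linear, one-shot.
    unseen = MotionDescriptor(contact=1, engagement=0, prismatic=1, revolute=0, recurrence=0)
    print(recognize(unseen))  # -> "push": the closest known primitive
```

Because the embedding is shared across tasks rather than tied to one motion model, a motion observed on a new object can be matched to a primitive learned elsewhere, which is the transferability and reuse property described above.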
From motion capture to robot execution via the motion taxonomy.