From motor control to embodied intelligence
In recent years, researchers have been exploring ways to teach robots to “learn” movement skills through artificial intelligence (AI) techniques such as reinforcement learning (RL). One such approach involves training a neural probabilistic motor primitive (NPMP) on tracked motion capture (MoCap) data, distilling the motion into short-horizon motor intention signals. The resulting NPMP can then guide the learning of movement skills in embodied agents, enabling far more efficient exploration of complex tasks.
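To make the two-stage structure concrete, here is a minimal sketch in numpy. All names, dimensions, and the linear "networks" are illustrative placeholders, not the actual architecture: an encoder compresses a short-horizon MoCap snippet into a latent motor intention, and a low-level controller (decoder) turns the current state plus that intention into a joint-level action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real model's dimensions differ.
OBS_DIM, LATENT_DIM, ACT_DIM, HORIZON = 10, 4, 6, 5

# Stage 1 (training time): an encoder compresses a short-horizon reference
# trajectory from motion capture into a latent "motor intention" z.
W_enc = rng.normal(size=(LATENT_DIM, HORIZON * OBS_DIM)) * 0.1

def encode(reference_trajectory):
    """Map a (HORIZON, OBS_DIM) MoCap snippet to a latent motor intention."""
    return W_enc @ reference_trajectory.ravel()

# Stage 2: a low-level controller (decoder) maps the current state plus the
# motor intention to a bounded joint-level action.
W_dec = rng.normal(size=(ACT_DIM, OBS_DIM + LATENT_DIM)) * 0.1

def low_level_controller(state, z):
    return np.tanh(W_dec @ np.concatenate([state, z]))

# Imitation step (loss and optimisation omitted): given z from the encoder,
# the decoder is trained to reproduce the tracked MoCap motion.
snippet = rng.normal(size=(HORIZON, OBS_DIM))
z = encode(snippet)
action = low_level_controller(snippet[0], z)
```

After training, the encoder is discarded and only the low-level controller is kept, so downstream tasks interact with the motion repertoire purely through the latent intention z.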
The NPMP’s primary advantage is that it makes exploration of complex tasks far more efficient. Because it is trained to imitate short-horizon snippets of motion from humans and animals, even randomly sampled motor intentions produce coordinated behavior rather than flailing. In simulated football, for example, teams of players progress to coordinated play, exhibiting both agile high-frequency motor control and long-horizon decision-making in the service of team play.
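The reuse step can be sketched as follows, again with placeholder names and weights rather than the actual system: the pre-trained low-level controller is frozen, and the downstream task policy outputs a motor intention instead of raw joint commands, so even random intentions map to well-formed actions.

```python
import numpy as np

rng = np.random.default_rng(1)
OBS_DIM, LATENT_DIM, ACT_DIM = 10, 4, 6

# Frozen low-level controller from the pre-trained NPMP. The weights here
# are random placeholders; in practice they come from the imitation stage.
W_dec = rng.normal(size=(ACT_DIM, OBS_DIM + LATENT_DIM)) * 0.1

def npmp_decoder(state, z):
    return np.tanh(W_dec @ np.concatenate([state, z]))

# Downstream RL: the task policy no longer emits raw joint torques. It emits
# a motor intention z, and the frozen decoder translates z into a low-level
# action. Only the task policy's weights are trained on the new task.
W_task = rng.normal(size=(LATENT_DIM, OBS_DIM)) * 0.1

def high_level_policy(state):
    return W_task @ state

state = rng.normal(size=OBS_DIM)
z = high_level_policy(state)
action = npmp_decoder(state, z)

# Even a randomly sampled intention yields a bounded, well-formed action,
# which is why exploration through the NPMP stays near naturalistic motion.
z_random = rng.normal(size=LATENT_DIM)
action_random = npmp_decoder(state, z_random)
```

The design choice to freeze the decoder is what restricts exploration to the space of plausible movements: the high-level policy can only steer within the motion repertoire the NPMP has already learned.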
In addition to these applications, the NPMP enables embodied agents to learn more naturalistic behavior than unstructured trial and error would produce. The motion capture data acts as a source of prior knowledge, biasing learning toward behaviors that are natural for the task at hand. This can help robots learn more safely and efficiently, with stable behaviors suitable for real-world deployment.
Overall, the NPMP represents a significant advance in AI research. By using learned motor primitives to guide RL, it enables agents to learn movement skills faster and more efficiently than traditional methods, providing a foundation for applications in fields such as sports analytics, robotics, and simulation.