
Robots Learn Body Shape and Movement Through Self-Observation

According to a recent study by Columbia Engineering researchers published in Nature Machine Intelligence, robots can learn the shape of their bodies and how to move by using a camera to record their own motions.

A robot observes its reflection in a mirror, learning its own morphology and kinematics for autonomous self-simulation. The process highlights the intersection of vision-based learning and robotics, where the robot refines its movements and predicts its spatial motion through self-observation. Image Credit: Jane Nisselson/Columbia Engineering

With this information, the robots were able to plan their own motions and recover from physical damage.

Like humans learning to dance by watching their mirror reflection, robots now use raw video to build kinematic self-awareness. Our goal is a robot that understands its own body, adapts to damage, and learns new skills without constant human programming.

Yuhang Hu, Study Lead Author and Doctoral Student, Creative Machines Lab, Columbia University

Most robots first learn to move in simulation; once they can maneuver in these virtual environments, they are released into the real world to continue learning.

The better and more realistic the simulator, the easier it is for the robot to make the leap from simulation into reality.

Hod Lipson, Professor, Columbia University

However, developing a good simulator is a difficult task that usually calls for experts with specialized knowledge. By using a camera to observe its own motion, the researchers were able to teach a robot how to make a simulator of itself.

Lipson added, “This ability not only saves engineering effort, but also allows the simulation to continue and evolve with the robot as it undergoes wear, damage, and adaptation.”

Rather than relying on an expert-built simulator, the researchers in the current work devised a method for robots to autonomously model their own 3D form using a single standard 2D camera. Three deep neural networks, AI systems loosely modeled on the brain, drove this innovation.

These networks allowed the robot to understand and adjust its own movements by inferring 3D motion from 2D video. The system could also detect changes to the robot’s body, such as a bend in an arm, and help it modify its movements to compensate for the damage.
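To make the idea concrete, here is a minimal sketch in PyTorch of what a vision-based self-model of this kind might look like: a network that maps a short clip of 2D frames, together with the commanded joint angles, to predicted 3D joint positions. The architecture, dimensions, and names below are illustrative assumptions, not the paper’s actual networks.

```python
# Hypothetical sketch of a vision-based self-model: 2D frames plus commanded
# joint angles in, predicted 3D joint positions out. Sizes are illustrative.
import torch
import torch.nn as nn

N_JOINTS = 6      # hypothetical robot arm with six joints
N_FRAMES = 8      # frames in the observed video clip
IMG = 64          # square frame resolution

class SelfModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN encoder; the clip's frames are stacked along the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(N_FRAMES, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head fuses visual features with the commanded joint angles
        # and regresses a 3D position for every joint.
        self.head = nn.Sequential(
            nn.Linear(64 + N_JOINTS, 128), nn.ReLU(),
            nn.Linear(128, N_JOINTS * 3),
        )

    def forward(self, frames, commands):
        z = self.encoder(frames)                      # (B, 64)
        out = self.head(torch.cat([z, commands], 1))  # (B, N_JOINTS * 3)
        return out.view(-1, N_JOINTS, 3)              # predicted 3D joints

model = SelfModel()
clip = torch.rand(1, N_FRAMES, IMG, IMG)   # grayscale clip of the robot moving
cmd = torch.rand(1, N_JOINTS)              # joint angles that were commanded
print(model(clip, cmd).shape)              # torch.Size([1, 6, 3])
```

Trained on footage of the robot’s own random motions, a model like this gives the robot a differentiable stand-in for its body: a simulator learned from observation rather than built by hand.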

This kind of flexibility could be helpful in many real-world situations.

Hu added, “Imagine a robot vacuum or a personal assistant bot that notices its arm is bent after bumping into furniture. Instead of breaking down or needing repair, it watches itself, adjusts how it moves, and keeps working. This could make home robots more reliable—no constant reprogramming required.”

In another example, a robot arm on a car assembly line could be knocked out of alignment.

“Instead of halting production, it could watch itself, tweak its movements, and get back to welding—cutting downtime and costs. This adaptability could make manufacturing more resilient,” Hu noted.
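The adaptation loop Hu describes can be sketched in the same hypothetical terms: if the self-model’s predictions start disagreeing with what the camera actually observes, the robot treats that as a sign its body has changed and refits the model on fresh footage. This continues the illustrative SelfModel sketch above; the error threshold and training details are assumptions, not the paper’s procedure.

```python
# Hedged sketch of damage detection and adaptation: refit the self-model
# when its predictions drift away from what the camera actually sees.
import torch

def adapt_if_damaged(model, clip, cmd, observed_joints,
                     threshold=0.05, lr=1e-3, steps=50):
    """Fine-tune the self-model when prediction error exceeds a threshold."""
    with torch.no_grad():
        error = torch.mean((model(clip, cmd) - observed_joints) ** 2).item()
    if error < threshold:
        return error  # body still matches the model; nothing to do

    # Prediction no longer matches reality (e.g., a bent arm):
    # refit the model on the newly observed motion.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(clip, cmd) - observed_joints) ** 2)
        loss.backward()
        opt.step()
    return error
```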

As humans delegate increasingly important work to robots in fields such as manufacturing and healthcare, these machines will need to become more resilient.

“We humans cannot afford to constantly baby these robots, repair broken parts and adjust performance. Robots need to learn to take care of themselves, if they are going to become truly useful. That is why self-modeling is so important,” Lipson added.

The capability shown in this study is the latest in a succession of experiments on self-modeling with cameras and other sensors that the Columbia team has published over the past 20 years.

In 2006, the team’s robots could use observations only to create simple, stick-figure-like simulations of themselves. About a decade ago, robots began building higher-fidelity models using multiple cameras.

In this study, the robot was able to create a comprehensive kinematic model of itself using just a short video clip from a single regular camera, akin to looking in the mirror. The researchers call this newfound ability “Kinematic Self-Awareness.”
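One simple way a kinematic self-model supports “imagining yourself in the future” is random-shooting planning: sample many candidate commands, predict each outcome with the model, and act on the best one. The sketch below reuses the hypothetical SelfModel defined earlier and is an assumed illustration, not the paper’s actual planner.

```python
# Illustrative planner over a learned self-model: try random joint commands
# in imagination, keep the one whose end effector lands nearest the goal.
import torch

def plan_with_self_model(model, clip, goal_xyz, n_candidates=256):
    """Random-shooting planner over joint commands using the self-model."""
    candidates = torch.rand(n_candidates, N_JOINTS)    # random joint commands
    clips = clip.expand(n_candidates, -1, -1, -1)      # same observed clip
    with torch.no_grad():
        joints = model(clips, candidates)              # imagined 3D poses
    end_effector = joints[:, -1, :]                    # last joint = "hand"
    dists = torch.linalg.norm(end_effector - goal_xyz, dim=1)
    return candidates[torch.argmin(dists)]             # best command found

best = plan_with_self_model(model, clip, goal_xyz=torch.tensor([0.2, 0.1, 0.5]))
```

Because every candidate is evaluated in the learned model rather than on the physical robot, the consequences of an action can be checked before any motor moves, which is the sense in which Lipson speaks of robots imagining themselves.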

Lipson explained, “We humans are intuitively aware of our body; we can imagine ourselves in the future and visualize the consequences of our actions well before we perform those actions in reality. Ultimately, we would like to imbue robots with a similar ability to imagine themselves, because once you can imagine yourself in the future, there is no limit to what you can do.”

Teaching Robots to Build Simulations of Themselves

Video Credit: Creative Machines Lab/Columbia Engineering

Journal Reference:

Hu, Y., et al. (2025). Teaching robots to build simulations of themselves. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-01006-w
