Researchers at Cornell University have developed a new artificial intelligence framework that enables robots to learn tasks by watching just one how-to video. The system, called RHyME (Retrieval for Hybrid Imitation under Mismatched Execution), represents a major step toward making robotic learning more flexible, efficient, and accessible.
Traditionally, training robots to perform even simple actions has required painstakingly detailed instructions and large datasets. Robots often struggle when unexpected variables arise—like dropping an object or encountering an unfamiliar setting. RHyME aims to overcome these limitations by allowing robots to learn through observation, much like humans do.
"One of the frustrating parts of working with robots is having to gather so much data for them to learn different tasks," said Kushal Kedia, a doctoral student in computer science and lead author of the study. "Humans don’t need that—we learn by watching others."
Kedia will present the research, titled "One-Shot Imitation under Mismatched Execution," at the IEEE International Conference on Robotics and Automation in Atlanta. The work is also available on the arXiv preprint server.
The long-standing challenge in robotics has been the gap between human motion and robotic capability. Robots typically require demonstrations to be slow, precise, and perfectly executed; any mismatch between human and robot movement could render the training ineffective.
"If a human performs a task even slightly differently than how a robot operates, the whole learning process can break down," said Sanjiban Choudhury, assistant professor of computer science and senior author of the study. "We wanted to find a systematic way to handle this mismatch."
RHyME addresses this issue by combining imitation learning with a retrieval-based memory system. When a robot watches a demonstration—like placing a mug in a sink—it compares that video to a collection of other task videos it has stored. It then draws inspiration from similar actions, such as gripping or lowering objects, to fill in the gaps and perform the task itself.
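To make the retrieval idea concrete, the sketch below shows one plausible way such a lookup could work: the human demonstration and the robot's stored clips are mapped to feature vectors, and the most similar robot clips are retrieved to stand in for the mismatched human motion. All names and the embedding function here are illustrative assumptions, not RHyME's actual implementation.

```python
# Minimal sketch of retrieval-style imitation under mismatched execution.
# The encoder and similarity measure are hypothetical placeholders, not the
# paper's architecture.
import numpy as np

def embed(clip: np.ndarray) -> np.ndarray:
    """Placeholder video encoder: flatten a clip and normalize it into a
    fixed-length feature vector (a real system would use a learned model)."""
    flat = clip.reshape(-1)[:128]
    return flat / (np.linalg.norm(flat) + 1e-8)

def retrieve_similar_clips(human_demo: np.ndarray,
                           robot_library: list[np.ndarray],
                           k: int = 3) -> list[np.ndarray]:
    """Return the k robot clips whose embeddings are closest (by cosine
    similarity) to the human demonstration segment."""
    query = embed(human_demo)
    scored = [(float(query @ embed(clip)), clip) for clip in robot_library]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [clip for _, clip in scored[:k]]

# The retrieved robot clips would then condition or supervise the robot's
# imitation policy in place of the mismatched human footage.
```

In this framing, the human video never has to match the robot's motion exactly; it only has to be similar enough to pull relevant robot experience out of memory.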
This method drastically reduces the amount of data required for effective robot training. With just 30 minutes of initial robot data, RHyME-equipped systems achieved over a 50% improvement in task success in lab tests compared to previous approaches.
According to Choudhury, the innovation marks a shift in how robots are programmed. “The current standard involves thousands of hours of human-guided training,” he said. “That’s just not scalable. With RHyME, we’re showing a path toward training robots using much less data, and in a way that allows them to adapt more naturally to real-world conditions.”
The team hopes this research brings the robotics field closer to the goal of general-purpose, home-assistant robots that can learn from everyday human activity—and keep going even when things don’t go exactly as planned.