
Robot learns surgical skills with human-level precision by watching videos

by Clarence Oxford

Los Angeles CA (SPX) November 12, 2024

A surgical robot trained by watching videos of experienced surgeons has performed surgical tasks as skillfully as human doctors.

The breakthrough relies on imitation learning, which greatly simplifies the programming of surgical robots. Because the robot learns from visual input rather than from hand-coded movements, the approach expands the potential for robots to perform complex operations autonomously.

“It’s really magical to have this model. All we do is feed it camera input and it can predict the robot movements required for the operation,” said senior author Axel Krieger. “We believe this represents a significant step toward a new frontier in medical robotics.”

The research, presented at the Conference on Robot Learning in Munich, is a collaboration between Johns Hopkins University and Stanford University. The team trained a da Vinci Surgical System robot, a platform known for its widespread use but also for its limited precision, to perform tasks such as needle manipulation, tissue lifting and suturing. Unlike traditional training, which requires precise, step-by-step programming, the model leverages machine learning similar to that behind ChatGPT. Instead of processing language, however, it works with kinematics, breaking robot movements down into mathematical expressions.
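
The article likens the model to the architecture behind ChatGPT, i.e. a transformer. The Python sketch below is purely illustrative of that idea, assuming a simple convolutional image encoder and a transformer decoder that predicts a short "chunk" of kinematic actions from a single camera frame; all class names, dimensions, and hyperparameters here are hypothetical, not taken from the team's actual architecture.

```python
# Illustrative sketch only: an imitation-learning policy that maps one
# wrist-camera frame to a short chunk of kinematic actions, loosely in the
# spirit of the transformer models the article compares to ChatGPT.
# All names, dimensions, and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class ImitationPolicy(nn.Module):
    def __init__(self, action_dim=7, chunk_len=20, d_model=256):
        super().__init__()
        # Small CNN turns the camera image into a grid of feature tokens.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.proj = nn.Linear(64, d_model)
        # One learned query per future action step in the motion chunk.
        self.queries = nn.Parameter(torch.randn(chunk_len, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        # Each output step is a kinematic action vector, e.g. end-effector
        # translation and rotation plus a gripper command.
        self.head = nn.Linear(d_model, action_dim)

    def forward(self, image):
        b = image.shape[0]
        feats = self.encoder(image)                # (B, 64, 8, 8)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 64, 64) image tokens
        memory = self.proj(tokens)                 # (B, 64, d_model)
        queries = self.queries.unsqueeze(0).expand(b, -1, -1)
        decoded = self.decoder(queries, memory)    # (B, chunk_len, d_model)
        return self.head(decoded)                  # (B, chunk_len, action_dim)

policy = ImitationPolicy()
frame = torch.randn(1, 3, 128, 128)                # one synthetic camera frame
actions = policy(frame)
print(actions.shape)                               # torch.Size([1, 20, 7])
```

At training time, a model of this general shape would be regressed against the motions recorded from surgeons, which is the essence of imitation learning: no step-by-step programming, just demonstrations.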

The researchers trained their model on hundreds of videos recorded by wrist cameras on da Vinci robots during surgeries. These recordings, collected worldwide for postoperative analysis, provide a rich data set for imitation learning: with nearly 7,000 units in use worldwide and more than 50,000 surgeons trained on the system, the da Vinci platform supplied extensive video data.

The key innovation is training the model to recognize and execute relative movements, avoiding the inaccuracies associated with absolute positioning. “All we need is an image input and then this AI system finds the right action,” explained lead author Ji Woong “Brian” Kim. With just a few hundred demonstrations, the model can learn a task and adapt to new environments.
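
To make the relative-versus-absolute distinction concrete, here is a toy numeric sketch; the variable names and values are invented for illustration. An absolute policy commands world coordinates, so any camera-to-robot calibration bias shifts every command, while a relative policy commands small deltas applied to the robot's own measured pose.

```python
# Toy illustration of relative vs. absolute action targets; all values
# and names are invented for this example.
import numpy as np

pose_now = np.array([0.10, 0.05, 0.20])   # current end-effector position (m)
pose_next = np.array([0.11, 0.05, 0.19])  # demonstrated next position (m)

# Absolute target: "go to these world coordinates". A biased calibration
# corrupts every such command by the same offset.
absolute_target = pose_next

# Relative target: "move by this small delta from wherever you are now".
relative_target = pose_next - pose_now    # roughly +1 cm in x, -1 cm in z

# At execution time the delta is applied to the robot's *measured* pose,
# so the executed displacement is correct even if the absolute pose
# estimate carries a constant bias.
calibration_bias = np.array([0.005, 0.0, 0.0])
measured_pose = pose_now + calibration_bias
commanded = measured_pose + relative_target

print(relative_target)  # the learned quantity: a small displacement
print(commanded)        # still moves by the demonstrated delta
```

This is one plausible reading of why relative actions sidestep the imprecision the article attributes to absolute ones: the learned deltas are grounded in where the instrument actually is, not in an error-prone global frame.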

The robot carried out the selected surgical tasks with skill matching that of human surgeons. Remarkably, it also adapted to unexpected situations, such as picking up a dropped needle on its own. “This is where the model is so good at learning things we didn’t teach it,” Krieger noted.

The researchers envision rapidly training robots for a variety of surgical procedures, in contrast to the lengthy manual coding previously required. “It’s very limiting,” Krieger said. “What’s new here is that we only need to collect demonstrations of different procedures, and we can teach a robot to learn them within a few days. This will allow us to reach the goal of autonomy more quickly while reducing medical errors and making surgery more precise.”

The team is now working on expanding this method to train robots for complete surgeries. Contributors from Johns Hopkins included graduate student Samuel Schmidgall, associate research engineer Anton Deguet and associate professor Marin Kobilarov. The Stanford team included graduate student Tony Z. Zhao.


Related links

Johns Hopkins University

Space medical technology and systems
