
AI System Identifies Surgical Phases in Robot-Assisted Prostatectomy Across Multiple Surgeons

AI-Driven Surgical Phase Recognition in Prostatectomy: Insights and Applications
10/24/2025

Researchers at Keio University School of Medicine have developed and validated an artificial intelligence (AI) model for automated recognition of surgical phases in robot-assisted radical prostatectomy (RARP). The study, published in Annals of Surgical Oncology, describes the use of a deep learning architecture—Temporal Convolutional Networks for the Operating Room (TeCNO)—to classify surgical video frames into predefined procedural steps.
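TeCNO uses dilated temporal convolutions to refine per-frame predictions with surrounding temporal context. As a rough illustration of why temporal modeling matters for phase recognition (this is a simplified stand-in, not the authors' architecture), a causal majority-vote filter over recent frames can suppress spurious single-frame misclassifications:

```python
from collections import Counter

def causal_smooth(frame_preds, window=15):
    """Smooth per-frame phase predictions with a causal majority vote.

    A crude stand-in for the temporal modeling TeCNO performs with
    dilated temporal convolutions: each frame's label is replaced by
    the most common label among the preceding `window` frames
    (inclusive of the current frame).
    """
    smoothed = []
    for i in range(len(frame_preds)):
        lo = max(0, i - window + 1)
        votes = Counter(frame_preds[lo:i + 1])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed

# Noisy per-frame predictions: phase 0 then phase 1, with spurious flips.
preds = [0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1]
print(causal_smooth(preds, window=5))
# → [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

The function name and window size are illustrative choices; the published model learns temporal dependencies rather than applying a fixed vote.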

The AI was trained using data from 65 RARP procedures performed by one surgeon (designated as surgeon A) and tested on two additional sets: 16 procedures by the same surgeon and 21 procedures conducted by five other surgeons (surgeons B–F). Across all datasets, each surgical video was manually annotated by board-certified urologists and divided into nine standard phases based on anatomical and procedural landmarks.

During testing on surgeon A’s procedures, the AI achieved a precision of 0.94, a recall of 0.93, an F1 score of 0.94, and an overall accuracy of 0.93. When applied to procedures from surgeons B–F, both precision and accuracy fell to 0.83. According to the authors, classification performance decreased for four phases in the multi-surgeon dataset, specifically those involving the endopelvic fascia and lymph node dissection. These discrepancies were attributed to inter-surgeon differences in surgical sequence and scope usage.
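The reported figures are frame-level metrics. A minimal sketch of how macro-averaged precision, recall, and F1 alongside overall accuracy could be computed from predicted versus annotated frame labels (the function name is hypothetical; the definitions are the standard ones):

```python
def frame_metrics(y_true, y_pred):
    """Frame-level accuracy plus macro-averaged precision, recall, and F1.

    Illustrative helper, not the authors' evaluation code. Each element
    of y_true / y_pred is one frame's phase label.
    """
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return {
        "precision": sum(precisions) / n,
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
        "accuracy": sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true),
    }

truth = [0, 0, 1, 1]
preds = [0, 1, 1, 1]
print(frame_metrics(truth, preds))
# accuracy 0.75, macro precision ≈ 0.833, macro recall 0.75
```

Macro averaging weights each phase equally, which is one plausible convention for a nine-class task; the paper should be consulted for the exact averaging used.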

The model classified video frames sampled at one-second intervals, for a total of over 1.38 million annotated frames. The nine phases included preparation, dissection steps, transection, anastomosis, and lymphadenectomy. “No step” frames, corresponding to moments when the surgical camera was outside the body, were excluded from phase categorization.
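Expanding annotated phase intervals into one label per sampled frame, while dropping out-of-body segments, could look like the following sketch (the interval format and function name are assumptions; phase names are drawn from those mentioned above):

```python
NO_STEP = "no_step"

def intervals_to_frame_labels(intervals):
    """Expand annotated (start_s, end_s, phase) intervals into one label
    per frame at 1 fps. Intervals are assumed half-open in seconds,
    [start_s, end_s). Frames marked 'no_step' (camera outside the body)
    are dropped, mirroring the study's exclusion of such frames.
    """
    labels = []
    for start_s, end_s, phase in intervals:
        if phase == NO_STEP:
            continue  # excluded from phase categorization
        labels.extend([phase] * (end_s - start_s))  # one frame per second
    return labels

ann = [(0, 3, "preparation"), (3, 5, "no_step"), (5, 9, "anastomosis")]
print(intervals_to_frame_labels(ann))
# → 3 'preparation' labels followed by 4 'anastomosis' labels
```

At one frame per second, the study's average of 13,575 frames per case corresponds to roughly 3.8 hours of annotated video.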

The study reports that the AI maintained performance despite variations in procedural flow and surgical approach within the training dataset, including optional phases such as lymph node dissection and differences in dissection side. Compared with previous phase-recognition systems for various laparoscopic and robotic procedures, the current model achieved comparable or higher accuracy while classifying a relatively large number of surgical phases.

The authors note that variability in technique, even within a single procedure type, presents challenges for consistent phase recognition. They also point to the potential for errors in situations not represented in the training data. The study mentions the “black box” nature of deep learning models as a limitation, particularly in contexts where interpretability of algorithmic decisions is important.

The model was implemented in Python and trained on one frame per second from each surgical video, with an average of 13,575 frames per case. Performance metrics were calculated by comparing predicted phase labels to manual annotations for each frame.

This study includes the first reported cross-surgeon validation of an AI model developed specifically for phase recognition in RARP. The authors suggest the system could be used to support post-operative analysis and to examine variability in surgical technique, though they acknowledge that further research is needed to evaluate its clinical integration.
