Bridging Vision and Language for Robust Context-Aware Surgical Point Tracking: The VL-SurgPT Dataset and Benchmark

Abstract


Accurate point tracking in surgical environments remains challenging due to complex visual conditions, including smoke occlusion, specular reflections, and tissue deformation. While existing surgical tracking datasets provide coordinate information, they lack the semantic context necessary to understand tracking failure mechanisms. We introduce VL-SurgPT, the first large-scale multimodal dataset that bridges visual tracking with textual descriptions of point status in surgical scenes. The dataset comprises 908 in vivo video clips, including 754 for tissue tracking (17,171 annotated points across five challenging scenarios) and 154 for instrument tracking (covering seven instrument types with detailed keypoint annotations). We establish comprehensive benchmarks using eight state-of-the-art tracking methods and propose TG-SurgPT, a text-guided tracking approach that leverages semantic descriptions to improve robustness in visually challenging conditions. Experimental results demonstrate that incorporating point status information significantly improves tracking accuracy and reliability, particularly in adverse visual scenarios where conventional vision-only methods struggle. By bridging visual and linguistic modalities, VL-SurgPT enables the development of context-aware tracking systems crucial for advancing computer-assisted surgery applications that can maintain performance even under challenging intraoperative conditions.

Dataset Introduction

Dataset Visualization
Data collection and annotation workflow for VL-SurgPT. (A) In vivo surgical setup using the da Vinci Xi system. (B) Ground truth acquisition using Indocyanine Green (ICG) fluorescent markers under UV illumination. (C-D) Annotation interface for point tracking and semantic labeling at 1 fps. (E) Coverage of 7 types of surgical instruments, 9 distinct visual status descriptions, and 5 representative challenging scenarios across our dataset.
Dataset Visualization
Comprehensive comparison of public surgical point tracking datasets. VL-SurgPT is the first dataset to provide synchronized vision-language annotations for both tissue and instrument tracking in in vivo conditions. Recording Condition: ex vivo = controlled laboratory environment, in vivo = live surgical procedures. Marker Type: SW = manual software annotation, Beads = physical 2 mm steel beads, IR = infrared contrast dye, UV = ultraviolet fluorescent dye. Annotation Level: V = vision-only coordinate annotations, VL = multimodal vision-language with semantic descriptions.
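To make the distinction between vision-only (V) and vision-language (VL) annotation levels concrete, here is a hypothetical sketch of what a single VL annotation record might look like. All field names and values are illustrative assumptions for exposition, not the released schema.

```python
# Hypothetical sketch of one vision-language (VL) annotation record.
# Field names and values are illustrative, not the actual released schema.
import json

record = {
    "clip_id": "tissue_0001",          # assumed identifier format
    "annotation_rate_hz": 1,           # points are annotated at 1 fps
    "points": [
        {
            "point_id": 0,
            # (x, y) pixel coordinates, one entry per annotated frame
            "track": [[412, 233], [415, 240], [409, 251]],
            # one status description per annotated frame (VL level);
            # a vision-only (V) dataset would omit this field
            "status": [
                "clearly visible",
                "partially occluded by smoke",
                "occluded by instrument",
            ],
        }
    ],
}
print(json.dumps(record, indent=2))
```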

Annotation Workflow

The video above shows how we worked with surgeons to annotate the videos at 1 fps. We built a PyQt5 tool, the Endoscopic Video Annotator (EVA), for annotating points on instruments and tissues in surgical scenes. On tissue, we primarily annotate points at blood-vessel junctions, where the visual texture is distinctive. On instruments, we annotate 3-7 keypoints per instrument, including the tip, the joints, and the main shaft.
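As a rough illustration of this kind of tool, below is a minimal point-annotation widget in PyQt5. It is a simplified sketch, not the actual EVA implementation: the class name, the frame file path, and the click-then-describe interaction are all assumptions for exposition.

```python
# Minimal sketch of a point-annotation widget in the spirit of EVA.
# Assumes PyQt5 and OpenCV are installed; names and paths are illustrative.
import sys
import json
import cv2
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QImage, QPixmap, QPainter, QPen
from PyQt5.QtWidgets import QApplication, QLabel, QInputDialog


class PointAnnotator(QLabel):
    """Displays one frame; a left-click records a point plus a status string."""

    def __init__(self, frame_bgr):
        super().__init__()
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        h, w, _ = rgb.shape
        # .copy() so the QImage owns its buffer after `rgb` goes out of scope
        image = QImage(rgb.data, w, h, rgb.strides[0], QImage.Format_RGB888).copy()
        self._pixmap = QPixmap.fromImage(image)
        self.setPixmap(self._pixmap)
        self.points = []  # list of {"x": int, "y": int, "status": str}

    def mousePressEvent(self, event):
        if event.button() == Qt.LeftButton:
            status, ok = QInputDialog.getText(
                self, "Point status",
                "Describe the point (e.g. 'occluded by smoke'):")
            if ok:
                self.points.append(
                    {"x": event.x(), "y": event.y(), "status": status})
                self._redraw()

    def _redraw(self):
        # Repaint all recorded points on a fresh copy of the frame
        pm = self._pixmap.copy()
        painter = QPainter(pm)
        painter.setPen(QPen(Qt.green, 6))
        for p in self.points:
            painter.drawPoint(p["x"], p["y"])
        painter.end()
        self.setPixmap(pm)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    frame = cv2.imread("frame_000.png")  # hypothetical frame extracted at 1 fps
    widget = PointAnnotator(frame)
    widget.show()
    app.exec_()
    print(json.dumps(widget.points, indent=2))
```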

VL-SurgPT Dataset Visualization

This figure shows the locations and statuses of our annotated points in five different surgical scenes and on seven different instrument types.

Visualization of point tracking (Tissue)

We show tissue point tracking results from TG-SurgPT (ours, blue), Track-On (red), and MFT (green) in five different scenarios. For visualization, we selected relatively long clips (about 6-20 seconds).
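For readers who want to produce similar overlays, below is a minimal OpenCV sketch that draws each method's tracked points on every frame in the colors used above. The per-frame `tracks` layout, function name, and file paths are assumptions for illustration, not the dataset's actual format.

```python
# Minimal sketch for overlaying per-method tracked points on video frames.
# Assumes `tracks[method][frame_idx]` is a list of (x, y) predictions.
import cv2

# BGR colors matching the figures: ours = blue, Track-On = red, MFT = green
COLORS = {"TG-SurgPT": (255, 0, 0), "Track-On": (0, 0, 255), "MFT": (0, 255, 0)}


def overlay_tracks(video_path, tracks, out_path="overlay.mp4"):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(
        out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for method, per_frame in tracks.items():
            if idx < len(per_frame):
                for x, y in per_frame[idx]:
                    cv2.circle(frame, (int(x), int(y)), 4, COLORS[method], -1)
        writer.write(frame)
        idx += 1
    cap.release()
    writer.release()
```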

Visualization of point tracking (Instrument)

We show instrument point tracking results from TG-SurgPT (ours, blue), Track-On (red), and MFT (green) in five different scenarios.

Licensing

The original VL-SurgPT dataset and its annotations may not be used for commercial purposes.