Automated robotic precision tasks using vision-language-action (VLA) models and fiber-optic sensing
"Advancements in Vision-Language Models (VLMs) have revolutionized robotic manipulation, enabling agents to handle diverse and complex tasks. However, high-stakes medical procedures, such as needle insertion, demand precision beyond visual perception alone. This research proposes a novel Vision-Language-Action (VLA) model tailored for precise needle insertion by fusing visual data with feedback from fiber-optic sensors. This multi-modal integration allows the model to generate fine-grained actions, ensuring safety and high precision in robotic control."
Keywords: Vision-Language-Action (VLA) Model, Robotic Needle Insertion, Multi-modal Sensor Fusion
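To make the proposed fusion concrete, the following is a minimal sketch of how visual and fiber-optic signals could be combined into a single action head. The architecture, dimensions, and names (FusionPolicy, vision_dim, fiber_dim, action_dim) are illustrative assumptions, not the paper's actual design; it assumes PyTorch and treats the upstream VLM embedding and the fiber-optic (e.g., fiber Bragg grating) readings as precomputed feature vectors.

```python
import torch
import torch.nn as nn


class FusionPolicy(nn.Module):
    """Hypothetical late-fusion policy: combines a visual embedding with
    fiber-optic strain readings to predict a fine-grained insertion action."""

    def __init__(self, vision_dim=512, fiber_dim=16, action_dim=7, hidden=256):
        super().__init__()
        # Encode each modality into a shared-size latent space
        self.vision_enc = nn.Sequential(nn.Linear(vision_dim, hidden), nn.ReLU())
        self.fiber_enc = nn.Sequential(nn.Linear(fiber_dim, hidden), nn.ReLU())
        # Joint head maps the concatenated latents to a continuous action
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, vision_feat, fiber_feat):
        fused = torch.cat(
            [self.vision_enc(vision_feat), self.fiber_enc(fiber_feat)], dim=-1
        )
        # Output could represent, e.g., a 6-DoF pose delta plus insertion depth
        return self.head(fused)


# Example with placeholder inputs standing in for real sensor/VLM features
policy = FusionPolicy()
vision_feat = torch.randn(1, 512)  # placeholder VLM image/text embedding
fiber_feat = torch.randn(1, 16)    # placeholder fiber-optic strain readings
action = policy(vision_feat, fiber_feat)
print(action.shape)  # torch.Size([1, 7])
```

Late fusion by concatenation is only one possible design; cross-attention between the fiber-optic stream and the VLM tokens is another common choice for multi-modal action generation.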