Research Context | Artificial Intelligence & Robotics
The research focuses on applying artificial intelligence to unmanned vehicles and robotic arms, enabling these systems to behave autonomously and advancing them into mobile robots with intelligent perception and decision-making capabilities.
Based on a smart IoT architecture, the research has developed swarm mobile robot systems covering land, sea, air, and underwater, together with their AI perception modules. These modules perform AI-based recognition of objects and environments and integrate seamlessly with the robots' motion control and with IoT platforms.
Swarm robots autonomously enter mission areas, collect multimodal environmental data (such as sound, posture, and images), and transmit this information to the cloud via IoT for AI model training and validation, enabling data analytics and intelligent recognition. The trained models are then sent back to on-site workstations, guiding various mobile robots to perform scenario recognition and mission execution.
Through this cyclical AI–IoT–Robot framework, the research ultimately enables mobile robots to make optimized decisions for tasks such as disaster response, security, surveillance, inspection, and combat, demonstrating the high effectiveness of intelligent autonomous systems in cross-disciplinary applications.
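As a rough illustration of this cycle, the sketch below models the collect → upload → train → redeploy loop in plain Python; all class and method names are invented for illustration and do not reflect the actual platform API.

```python
# Minimal sketch of the AI-IoT-Robot cycle; all names are illustrative,
# not the lab's actual platform interface.
from dataclasses import dataclass, field

@dataclass
class FieldRobot:
    samples: list = field(default_factory=list)  # sound, posture, images
    model: dict | None = None

    def collect(self):            # multimodal data gathered in the mission area
        return self.samples

    def deploy(self, model):      # trained model pushed back to the workstation
        self.model = model

@dataclass
class CloudTrainer:
    dataset: list = field(default_factory=list)

    def ingest(self, samples):    # IoT uplink: robots -> cloud
        self.dataset.extend(samples)

    def train(self):              # stand-in for model training and validation
        return {"version": len(self.dataset)}

def mission_cycle(robots, cloud):
    for robot in robots:
        cloud.ingest(robot.collect())
    model = cloud.train()
    for robot in robots:
        robot.deploy(model)       # guides on-site recognition and execution
```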
🌿Responding to Taiwan's Ecological Crisis: The Challenge of Green Iguanas
Taiwan is facing an ecological crisis: green iguanas are proliferating unchecked, roaming the landscape like miniature Godzillas. To address this, the "Autonomous Robotics Lab" at National Taiwan University of Science and Technology has developed a third-generation shooting and fire-control system, aiming to overcome the second-generation system's limited firepower and inability to fire continuously, and to provide an efficient tool for restoring ecological balance.
🔧System Design Highlights
New Firing Mechanism
■ The gun-mounting and firing mechanism has been completed in preliminary form, supporting both manual push-button operation and remote control over the network.
■ Mounted on ground robots for precise target shooting tests.
AI Intelligent Regulation
■ After firing, AI analyzes hit rate and conducts automated accuracy calibration.
■ Adjusts the robot's posture and firing angle to engage targets over a wide area.
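A minimal sketch of the post-shot calibration idea: assuming the AI reduces the mean observed miss offset with a proportional correction (the gain and the degree-based offsets are illustrative, not the lab's actual controller):

```python
# Proportional aim correction from observed hit offsets (illustrative gains).
def calibrate(pan_deg, tilt_deg, hit_offsets, gain=0.4):
    """hit_offsets: list of (dx, dy) miss errors in degrees, one per shot."""
    if not hit_offsets:
        return pan_deg, tilt_deg
    mean_dx = sum(dx for dx, _ in hit_offsets) / len(hit_offsets)
    mean_dy = sum(dy for _, dy in hit_offsets) / len(hit_offsets)
    # Shift the firing angle against the mean miss direction.
    return pan_deg - gain * mean_dx, tilt_deg - gain * mean_dy
```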
Future Upgrades
■ "Micro-Angle Control": Adds electrically controlled rotating turret to further improve fine adjustment capability for shooting control.
■ "Intelligent Upgrade": Uses AI to optimize shooting strategies, enhance hit rate and reaction speed, and adapt to dynamic target needs.
■ "Mobility Enhancement": Can be mounted on our UAVs.
Context
💡Inspired by the concept of a towed missile-launcher vehicle, we have developed an "autonomous mobile projectile-launching robot." The robot is fully self-made: every hardware and software component was designed and developed independently by our team.
Currently, we have completed the "tele-operated" mode and successfully advanced to the finals of the "2024 Tokyo Willpower Tech Innovation Robot Competition."
System Design
🔎 We are now focused on developing the "automatic" mode, with the ultimate goal of a fully "autonomous" mode that uses artificial intelligence (AI) to optimize the robot's searching, tracking, locking-on, aiming, firing, and motion functions on the way to complete autonomy.
Our "Autonomous Robotics Laboratory" at National Taiwan University of Science and Technology has successfully created the "AIoT Autonomous Shopping Cart for Unmanned Stores", leveraging AI technology integrated with the Advantech WISE-PaaS platform to greatly enhance both customer shopping experience and store operation efficiency.
Customer pain points solved:
Long queues → Automatic checkout
Hard to locate products → Real-time product location/stock searching
Insufficient customer service → AI virtual assistant for instant response and recommendations
Shopping mall too large → Cart tracking and location—never lost
Business owner pain points addressed:
Dispersed systems → Unified data management platform
Labor inefficiency → Automated gap-filling, reduced staffing needs
Lack of consumer insight → Big data analytics for decision support
Competitive advantage → Smart shopping technology for future stores
Our solution:
Smart shopping cart × AI staff
WISE-IoTSuite data integration
Real-time AI assistant and recommendations
Real-time location and product information
→ Building the smartest AI shopping experience!
Competition Result:
2024 Advantech AIoT InnoWorks Competition | Winner
Our "Autonomous Robotics Laboratory" at National Taiwan University of Science and Technology has developed a mobile robot autonomous navigation system powered by Reinforcement Learning (RL).
Limitations of traditional navigation:
Conventional navigation systems lack self-learning capability and struggle to make optimal decisions in unknown and dynamic environments.
Key innovations in this system:
RL-based navigation strategy, enabling robots to:
Interact with the environment and learn autonomously through reward mechanisms
Identify optimal and efficient paths
Recognize and reach target positions
Avoid both static and dynamic obstacles
To reduce real-world testing risks, the system primarily undergoes offline training in simulation environments. Recognizing that real-world environments change over time, we further integrated online dynamic learning into the robot, enabling it to adapt and maintain stable, safe autonomous navigation after deployment.
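The published system uses deep RL; as a simplified stand-in, the tabular Q-learning loop below shows the reward-driven interaction described above on a toy grid world (grid size, rewards, and hyperparameters are invented for the sketch):

```python
import random

# Toy 5x5 grid world: start (0,0), goal (4,4), static obstacle at (2,2).
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = {}

def step(state, action):
    x = min(max(state[0] + action[0], 0), 4)
    y = min(max(state[1] + action[1], 0), 4)
    if (x, y) == (2, 2):
        return state, -10.0          # obstacle: penalty, no movement
    if (x, y) == (4, 4):
        return (x, y), 100.0         # goal reward
    return (x, y), -1.0              # step cost favors short paths

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    for _ in range(episodes):
        s = (0, 0)
        while s != (4, 4):
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda act: Q.get((s, act), 0.0)))
            s2, r = step(s, a)
            best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * best_next - q)  # Q-learning update
            s = s2
```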
Publication:
SCI Journal Paper: Lee, Min-Fan Ricky, and Sharfiden Hassen Yusuf. 2022. "Mobile Robot Navigation Using Deep Reinforcement Learning." Processes 10, no. 12: 2748.
Context
🏹In today’s highly complex, densely targeted, and dynamically changing environment, robots must rapidly distinguish between valid signals and potential threats to ensure missions are carried out safely and continuously. This demands a balance between high-speed response and precise recognition, achieving instant locking and effective defense.
System Design
🔍 Target Recognition and Tracking
The robot recognizes gestures; once the challenge password is verified, it enters "tracking" mode and moves synchronously to lock onto the target.
⚠️ Alert and Shooting
The robot detects anomalous targets entering the alert zone. Upon identifying an incorrect gesture, it immediately turns toward the target and performs "autonomous shooting."
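A minimal sketch of the challenge-and-response logic as a small state machine, assuming gesture labels arrive from a separate perception module (states and labels are illustrative):

```python
from enum import Enum, auto

class Mode(Enum):
    ALERT = auto()      # watching the alert zone
    TRACKING = auto()   # password verified: lock on and follow
    ENGAGE = auto()     # wrong gesture: turn and fire autonomously

def update(mode, gesture):
    """gesture: label from the perception module, e.g. 'password' or 'unknown'."""
    if mode is Mode.ALERT and gesture == "password":
        return Mode.TRACKING        # verified friend: enter tracking mode
    if mode is Mode.ALERT and gesture == "unknown":
        return Mode.ENGAGE          # failed challenge: autonomous shooting
    return mode
```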
We have developed an autonomous indoor security robot.
Traditional security monitoring is often affected by uncertainties such as lighting conditions, viewing angles, occlusions, and changes in appearance. Our work introduces an AI-enabled autonomous navigation robot for dynamic indoor patrol, featuring:
Suspicious person detection
Autonomous target tracking
Real-time alert notifications sent to mobile phones via LINE
This system maintains stable surveillance in variable environments and greatly enhances automation and intelligence for indoor security.
Publication:
SCI Journal Paper: Lee, Min-Fan Ricky, and Zhih-Shun Shih. 2022. "Autonomous Surveillance for an Indoor Security Robot." Processes 10, no. 11: 2175.
Our "Autonomous Robotics Laboratory" at National Taiwan University of Science and Technology developed a pandemic response robot, integrating AI autonomous navigation and computer vision technologies. This robot can autonomously patrol indoor and outdoor spaces to assist with various pandemic control tasks. In line with an open-sharing spirit, the work was promoted via SCI journal publication rather than patent application.
Main functions:
Autonomous movement: environmental perception, map building and localization, path planning, motion control
Object recognition: temperature detection, mask detection, face recognition
By leveraging intelligent robotics, the goal is to improve pandemic prevention efficiency and contribute to Taiwan’s epidemic control efforts.
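As an illustration of the path-planning function listed above, here is a compact A* search over an occupancy grid (the grid representation and unit step cost are invented for this sketch; the robot's actual planner is not detailed in this post):

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = occupied. Returns a list of cells or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nx, ny = cur[0] + dx, cur[1] + dy
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and (nx, ny) not in seen):
                heapq.heappush(open_set, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None
```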
Publication:
SCI Journal Paper: Lee, Min-Fan Ricky, and Yi-Ching Christine Chen. 2022. "COVID-19 Pandemic Response Robot." Machines 10, no. 5: 351.
We have developed a renewable energy management system for mobile robots.
The power supply for mobile robots often faces uncertainty in sustained operating time. This system uses artificial intelligence for autonomous management, allowing smart allocation and utilization of solar energy, hydrogen energy, and lithium batteries, ensuring that land, sea, air, and underwater mobile robots can continue their missions even in areas where charging is not possible, such as disaster response deployments.
System features:
Solar energy: harnessing sunlight for power generation
Fuel cell: acquiring oxygen from air and hydrogen from rainwater or stored sources
Lithium battery: providing stable power supply
AI management: intelligently distributing energy to achieve sustainable operation
This system provides a reliable energy solution for long-duration autonomous tasks across multimodal mobile robots.
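A rule-based sketch of the allocation idea: prefer solar when available, let the fuel cell carry sustained deficits, and use the battery as a buffer (thresholds and priorities are illustrative; the AI manager in the paper is more sophisticated):

```python
def allocate(demand_w, solar_w, h2_available, battery_soc):
    """Greedy source dispatch; returns the draw per source in watts."""
    plan = {"solar": min(demand_w, solar_w), "fuel_cell": 0.0, "battery": 0.0}
    deficit = demand_w - plan["solar"]
    if deficit > 0 and h2_available:
        plan["fuel_cell"] = deficit          # fuel cell covers the base deficit
        deficit = 0.0
    if deficit > 0 and battery_soc > 0.2:    # keep a 20% battery reserve
        plan["battery"] = deficit
    return plan
```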
Achievements:
Taiwan Patent: “Multi-Power Source Unmanned Aircraft,” Invention No. 619643
SCI journal paper: Lee, Min-Fan Ricky, and Asep Nugroho. 2022. "Intelligent Energy Management System for Mobile Robot." Sustainability 14, no. 16: 10056.
Context
📋️ Our designed "unmanned vessel" performs static and dynamic autonomous shooting over water, testing the newly designed second-generation shooting system. It flexibly responds to challenges of both static and moving targets, ensuring the shooting system operates with stability and precision. The process involves rapid target identification, real-time posture adjustments, and precise control, demonstrating comprehensive autonomous shooting capabilities.
System Design
🎯 "Static Shooting": The robot is fixed in place, completes target "aiming" and "locking," and shoots at the static target vessel located at the UAV helipad, testing the stability and accuracy of the shooting system.
🚤 "Dynamic Shooting": While moving, the robot "searches," "aims," and "locks" onto dynamic targets, shooting at the moving target vessel located at the UAV helipad, testing the system's flexibility and ability to respond.
Context
🛳️ We have completed a fully self-made unmanned surface vehicle (USV) — equipped with an AI target identification and tracking system.
This project is independently developed by our team, covering every aspect from conceptual design and hull manufacturing to artificial intelligence perception. It is the second model, the Catamaran, launched after our first Monohull vessel.
System Design
📡The system features high modularity and scalability, enabling rapid integration of new functions and upgrades in the future.
Both vessel types are equipped with UAV takeoff and landing platforms, allowing air-sea coordinated missions with drones.
Our system integrates AI autonomous navigation technology, conducting dynamic maritime patrols and identifying and tracking suspicious vessels, demonstrating the independent research and development capabilities of Taiwan's intelligent maritime technology.
Context
🚢 We have completed the first-generation fully self-made unmanned vessel — the Monohull. This vessel features a UAV takeoff and landing platform, enabling coordinated air-sea operations with drones and opening new horizons for multi-domain autonomous missions and surveillance. Faced with the diverse challenges of marine environments, the system demonstrates outstanding stability and communication reliability, further enhancing the flexibility and precision of maritime operations.
System Design
🌎️This work has successfully integrated self-made mechanisms, electromechanical systems, wireless control, stability monitoring, sensing, and communication technologies to achieve efficient autonomous navigation and mission execution. The system architecture emphasizes modular design and future scalability, providing a robust foundation for diverse mission deployments.
Context
✉️We have developed an aerial drone-based face recognition and tracking system, specifically designed for autonomous aerial surveillance and intelligent perception tasks.
System Design
🙂The system can detect faces in real time during flight, lock onto the target location, and continuously track moving targets, maintaining stable recognition and tracking capabilities even in complex environments.
This project demonstrates the integrated application of AI-based image recognition and autonomous drone control, with potential use cases in security, search and rescue, and intelligent inspection scenarios.
We have successfully developed a fully in-house octocopter UAV for maritime patrol missions, equipped with an AIS (Automatic Identification System) receiver for vessel tracking and dynamic monitoring to assist the Coast Guard in maritime surveillance.
This UAV features:
500-meter high-altitude patrol capability (roughly equivalent to the height of Taipei 101)
Direct launch capability from coast guard vessels
The ability to transform conventionally shore-fixed AIS into a mobile aerial scanning system
Significantly enhanced scan radius and dynamic monitoring range
By bringing AIS “into the sky,” the system can:
Extend detection range
Enhance maritime dynamic scanning capability
Improve marine traffic management and search-and-rescue effectiveness
What is AIS? (Automatic Identification System)
AIS is an essential automatic tracking system for ships, capable of exchanging information such as location, ship name, speed, and heading. This information can be received by shore stations, vessels, or satellites.
When received by satellite, it is called S-AIS.
AIS allows users to:
Display vessel locations and routes
Provide maritime radar information
Assist vessels in collision avoidance
Enable coast guard units to monitor maritime vessel movements
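As one example of how decoded AIS reports support collision avoidance, the sketch below computes the closest point of approach (CPA) and its time from two vessels' positions and velocities, under a flat-earth, constant-velocity assumption (field conventions are illustrative):

```python
import math

def cpa(p1, v1, p2, v2):
    """p: (x, y) in meters, v: (vx, vy) in m/s, from decoded AIS reports.
    Returns (cpa_distance_m, time_to_cpa_s) under a constant-velocity model."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]     # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]     # relative velocity
    vv = vx * vx + vy * vy
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + vx * t, ry + vy * t          # separation at closest approach
    return math.hypot(dx, dy), t
```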
Our “Autonomous Robotics Laboratory” at National Taiwan University of Science and Technology has developed an aerial face recognition and tracking system for unmanned aerial vehicles, specifically designed for autonomous aerial surveillance and intelligent perception tasks.
The system enables real-time face detection and target localization during flight, maintaining continuous tracking of moving targets. It can sustain stable recognition and tracking performance even in complex environments.
This work demonstrates the integrated application of AI-based image recognition and autonomous UAV control, offering potential for security, search and rescue, and intelligent inspection scenarios.
Achievement|SCI journal paper: Lee, M. F. R., Li, Y. C., & Chien, M. Y. (2015). Real-time face tracking and recognition using the mobile robots. Advanced Robotics, 29(3), 187–208.
Our “Autonomous Robotics Laboratory” at National Taiwan University of Science and Technology has developed a fully in-house UAV-based AI ground vehicle recognition system.
When UAVs detect vehicles, they often face challenges such as:
Occlusion
Low illumination
Seasonal changes
Viewpoint variations
Perception interference
Traditional models cannot update themselves according to environmental changes, making it difficult to maintain stable recognition.
Our solution: Four core technologies
Distributed × federated adaptive architecture
Enables UAVs to “learn while flying,” continuously updating models to adapt to environmental changes.
Creation of uncertainty-aware training datasets
Enhances the model’s robustness to diverse environments.
Low-light enhancement module
Improves recognition performance in nighttime and dim conditions.
Explainable AI visualization techniques
Make AI detection results more transparent and trustworthy.
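A minimal sketch of the federated-averaging idea behind the distributed × federated architecture: each UAV trains locally, and a server averages the resulting weights (a pure-NumPy stand-in; the actual model and update rule are not described here):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-UAV model parameters (FedAvg-style).
    client_weights: list of parameter lists (np.ndarray), one per UAV.
    client_sizes:   number of local samples each UAV trained on."""
    total = float(sum(client_sizes))
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
```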
We have successfully developed a UAV identification friend-or-foe (IFF) and adaptive response system, capable of autonomously completing the following tasks in complex environments with varying lighting, viewpoints, and occlusions:
Identification of friend and foe
Avoiding friendly forces
Tracking hostile targets
Real-time decision-making
Explainable actions
The UAV is able to continually learn from environmental changes and provide reliable and understandable reasons for its behavior.
Experimental results show:
Faster system convergence and higher precision, effectively improving tracking and obstacle avoidance performance.
This technology can be applied to:
National defense patrols
Disaster search and rescue
Multi-vehicle collaborative missions
Aerial surveillance
Our “Autonomous Robotics Laboratory” at National Taiwan University of Science and Technology has developed a UAV-based human pose recognition and tracking system designed specifically for surveillance and disaster response tasks.
Traditional human pose recognition is easily affected by factors such as environmental lighting, viewing angle, occlusion, pose ambiguity, and overlapping of multiple people, which limit its accuracy.
To overcome these challenges, we designed a new type of drone and integrated AI technology, enabling multiple functions during aerial patrol:
Joint detection: accurately recognizes 17 keypoints (ankle, knee, hip, wrist, elbow, shoulder, neck, ear, eye)
Pose estimation: classifies 7 types of actions (kicking, punching, falling, waving, squatting, walking, standing)
Target tracking: patented instant rotational torque mechanism allows the UAV to quickly follow target movements
Pose interpretation: determines 3 situations (fighting, normal, distress)
This work demonstrates the integrated capabilities of aerial AI perception and autonomous control, with potential applications in security surveillance, disaster rescue, and intelligent inspection, enhancing rapid response and decision-making abilities.
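As a toy illustration of interpreting the detected keypoints, the heuristic below flags a possible fall when the neck-to-hip axis is nearly horizontal (keypoint naming and the angle threshold are invented; the published system uses a deep-learning classifier):

```python
import math

def looks_fallen(keypoints, angle_thresh_deg=30.0):
    """keypoints: dict of name -> (x, y) in image coordinates, y grows downward.
    Flags a fall when the neck-to-hip axis is close to horizontal."""
    nx, ny = keypoints["neck"]
    hx = (keypoints["left_hip"][0] + keypoints["right_hip"][0]) / 2
    hy = (keypoints["left_hip"][1] + keypoints["right_hip"][1]) / 2
    torso_angle = math.degrees(math.atan2(abs(hy - ny), abs(hx - nx) + 1e-6))
    return torso_angle < angle_thresh_deg  # near-horizontal torso => lying down
```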
Achievements
SCI journal paper: Lee, Min-Fan Ricky, Yen-Chun Chen, and Cheng-Yo Tsai. 2022. "Deep Learning-Based Human Body Posture Recognition and Tracking for Unmanned Aerial Vehicles." Processes 10, no. 11: 2295. Article link:
https://doi.org/10.3390/pr10112295
ROC Patent: “Torque Generating Device and Multirotor Aircraft,” Patent No. I749799, Inventors: Min-Fan Lee, Yen-Chun Chen
Context
🛶We have completed a fully self-developed miniature autonomous unmanned submarine, featuring an AI identification and tracking system. Every aspect, from design and manufacturing to system integration, was developed independently by our team.
This submarine, our second model following the single-propeller version, introduces a quad-propeller configuration, offering enhanced maneuverability and mission capability.
System Design
🕛️ The system boasts strong environmental adaptability and autonomous decision-making, boosting the vehicle’s mission performance in complex underwater environments.
It integrates AI perception and autonomous navigation technologies, enabling dynamic underwater patrols and identification, locking, and tracking of "moving target submarines."
Underwater recognition faces highly challenging interferences from lighting, occlusion, seasonal changes, and varying perspectives.
Our project introduces an AI approach that overcomes the limitations of traditional machine learning (which requires explicit features) and deep learning (which demands large amounts of data), delivering a more flexible and practical smart solution for underwater vehicles.
We have developed a fully self-built, small autonomous unmanned submarine!
This work adopts a center-of-gravity-shifting control mechanism and a modified deep-water mini thruster, enabling underwater ascent, descent, and attitude adjustment, showcasing complete R&D capabilities from conceptual design and precision manufacturing to underwater testing.
This submarine continues our laboratory’s tradition of in-house development across land, sea, and air platforms, and now successfully enters the underwater domain—officially marking a milestone of building our own quad-modal mobile robots (land, sea, air, underwater)!
Potential applications include: environmental monitoring, underwater search and rescue, ocean research, and defense technology.
Achievements
ROC Patent: “Underwater Vehicle Center of Gravity Adjustment Device,” Patent No. I673206
SCI Journal Paper: Lee, Min-Fan Ricky, and Yen-Chun Chen. 2022. "An Innovative Pose Control Mechanism for a Small Rudderless Underwater Vehicle." Machines 10, no. 5: 352.
We have successfully developed an AI-powered small submarine for underwater target search and recognition!
The system research workflow is as follows:
Underwater platform testing:
Conducted underwater navigation, image transmission, and stability verification of the small submarine in the NTUST swimming pool.
Marine image collection:
Deployed to the Badouzi waters of northern Taiwan and the Kenting waters of southern Taiwan, where the submarine captured a large volume of marine-environment and fish images to build a substantial underwater fish image database.
AI model training:
Used deep learning to train fish species classification models. For protected species with insufficient image data, we applied data augmentation (generative AI) techniques to increase data volume and improve recognition performance.
Marine AI field tests:
Conducted follow-up submarine deployments for model validation. The AI was able to search for and recognize specific protected fish species, successfully demonstrating underwater ecological detection capabilities.
This research integrates autonomous underwater vehicle navigation, marine image big data, deep learning, and generative AI.
Potential applications include: marine conservation monitoring, fish population surveys, autonomous underwater robotics research, and underwater target search and recognition.
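The classic augmentation side of that training step could look like the torchvision sketch below (parameter values are illustrative; the generative-AI augmentation we used is not shown):

```python
from torchvision import transforms

# Standard augmentation for under-represented fish classes (illustrative values).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomResizedCrop(size=224, scale=(0.7, 1.0)),
])
# variants = [augment(img) for _ in range(8)]  # 8 variants per source image
```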
🇹🇼 Double Tenth National Day "Autonomous Robotics Laboratory" Annual Full Equipment Inspection and Live-Action Drill
On October 10th, 2024 (National Day), our "Autonomous Robotics Laboratory" at National Taiwan University of Science and Technology carried out a grand outdoor combined drill with robots operating on land, sea, air, and underwater, together with a flag-raising ceremony. The drill was both an annual exercise in equipment mastery and inspection and a showcase of the cooperative combat capabilities of our multi-domain robotics.
🎯 Drill procedure:
■ Ground robots fired upwards, escorting another ground robot carrying a drone to the flag-raising point by the shore.
■ The drone took off from the ground robot, raised the national flag, and landed on the surface robot.
■ The submarine subsequently surfaced, escorting and saluting.
■ Finally, the drone took off from the vessel, returned to the ground robot, and completed its landing.
🌐 Purpose: Achieve equipment mastery and comprehensive inspection—deploying every robot model to ensure flawless operation.
🔧 Robots not involved in the primary tasks—land, sea, air, and underwater—also participated in flag escort patrols. All drones were airborne on standby; personnel not on mission remained in place and saluted, demonstrating respect and honor for the national flag.
🔖 This joint drill was a tremendous success: realistic scenarios, precise timing, fully testing our technology and collaborative capability.
We have developed an environmental monitoring system based on the collaboration of aerial and ground robots!
The aerial robot is responsible for searching and locating items to be inspected.
The ground mobile robot receives the location and moves to retrieve the object.
A five-axis robotic arm delivers the target to a micro-spectrometer for analysis.
Spectral information is instantly transmitted back to the ground workstation.
Finally, the aerial robot autonomously searches for a spot and lands.
Achievements
Competition: Honorable Mention|2014 Intelligent Robot Creative Competition—Industrial Robot Smart Application Creative Group|Ministry of Education × Ministry of Economic Affairs
SCI Journal Paper: Lee, Min-Fan Ricky, Fu-Hsin Steven Chiu, Clarence W. de Silva, and Chia-Yu Amy Shih. 2014. "Intelligent Navigation and Micro-spectrometer Content Inspection System for a Homecare Mobile Robot." International Journal of Fuzzy Systems 16, no. 3: 389–399.
Media Interview: PTS (Public Television Service), “The Independents” Documentary, Episode 397 (Unmanned Sky), May 13, 2015
We have successfully developed a land-sea-air collaborative robot system!
System workflow:
Unmanned surface vessel (sea) transports a UAV to the designated location.
UAV (air) autonomously takes off from the unmanned vessel's landing pad.
The UAV flies toward the ground robot (land).
The UAV autonomously lands on the ground robot’s landing platform, completing cross-platform cooperation.
This system demonstrates the autonomous collaboration capabilities of land, sea, and aerial vehicles and can be applied to:
Maritime inspection
Port logistics
Disaster search and rescue
Multi-vehicle intelligent cooperative missions
We have successfully developed a collaborative formation system for maritime unmanned surface vessels and underwater submarines!
The unmanned surface vessel operates on the water surface while the submarine performs missions underwater—even though both operate in environments with different drag and speed conditions, precise control enables them to maintain a stable formation.
The submarine is connected to the unmanned vessel by a cable, requiring synchronized navigation. Any speed discrepancy can cause cable tension and affect the submarine’s capability to conduct underwater reconnaissance missions beneath the vessel.
This system demonstrates cross-medium (surface × underwater) multi-vehicle autonomous collaboration technology, allowing coordinated navigation, task division, and underwater operational support in complex maritime environments.
We have successfully developed maritime-aerial collaboration technology, enabling UAVs to autonomously identify, localize, and precisely land on moving unmanned surface vessels!
System workflow:
UAV detects the unmanned vessel
Quickly locates the vessel using imagery and non-AI models.
Dynamic tracking × relative positioning
Maintains a stable lock on the vessel even with water surface motion.
Helipad recognition
Combines visual recognition and geometric localization to identify the optimal landing spot.
Autonomous deck landing
Achieves high-precision landing on a moving platform.
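A simplified sketch of the recognition-to-landing loop, assuming a high-contrast pad marking: threshold the camera frame, take the largest contour's centroid, and turn the pixel offset into lateral velocity setpoints (the OpenCV calls are standard; the gain and threshold are illustrative):

```python
import cv2

def helipad_velocity_cmd(frame_bgr, gain=0.002):
    """Returns (vx, vy) setpoints from the pixel offset to the pad centroid,
    or None if no pad-like blob is found. Gain and threshold are illustrative."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright marking
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    # Proportional control: steer until the centroid sits at the image center.
    return gain * (cx - w / 2), gain * (cy - h / 2)
```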
Application fields:
Maritime inspection
Offshore supply
Ocean monitoring
Multi-vehicle intelligent collaborative missions
We have developed a quad-modal (land, sea, air, underwater) collaborative system!
Mission workflow:
Ground mobile robot (land):
Equipped with a UAV, moves to a designated rendezvous point and stands by.
Unmanned surface vessel × submarine (sea × underwater):
The surface vessel navigates above water, the submarine operates underwater.
Both synchronously perform environmental scanning (detecting reefs, obstacles), collaboratively planning a safe route to the UAV rendezvous point.
UAV autonomous takeoff (air):
The UAV autonomously takes off from the helipad on the ground robot.
Cross-vehicle collaborative navigation (land × sea × air):
The unmanned vessel and ground robot both provide positioning assistance, guiding the UAV to accurately fly above the unmanned vessel.
Autonomous aerial landing (air → sea):
The UAV detects the unmanned vessel's helipad and autonomously lands on the moving deck of the vessel.
This system can be applied to: maritime inspection, disaster rescue, port logistics, and multi-robot cooperative missions.
We have developed a ground-air collaborative robot system! The hovering aerial drone can perform real-time visual perception and map construction, and navigate ground mobile robots to their targets in real time.
Experimental results show that the ground robot, guided by aerial imagery, successfully reaches the target with an average error of only 85.89 cm! The system offers precise positioning, obstacle avoidance, and shortest path planning, demonstrating autonomous navigation capabilities in joint ground-air operations.
This technology can be applied to disaster search and rescue, environmental monitoring, and logistics transport, showcasing the future potential for intelligent cross-platform robot collaboration!
We have successfully developed a drone × ground robot collaboration system!
The drone can autonomously identify the ground robot’s landing platform from the air, stably track, and precisely land—even with ground movement or environmental disturbances, safe landing is ensured.
This system demonstrates the capabilities of ground-air collaboration and autonomous navigation technology, applicable in logistics transport, disaster response, inspection, monitoring, and various other missions.
With technology, robots are not just tools—they become intelligent partners capable of working collaboratively with other platforms to accomplish tasks!
We have developed maritime-aerial collaboration technology, enabling drones to autonomously and precisely land on moving unmanned vessels and supporting remote operation modes.
Autonomous Mode:
The drone automatically recognizes, locates, and tracks the unmanned vessel, generates the optimal landing trajectory, and stabilizes the landing process.
Remote Operation Mode:
Operators can select targets on the screen, after which the system will automatically track them, enhancing mission flexibility.
Core Modules:
① Target Recognition and Tracking
② Landing Trajectory Generation and Optimization
③ Landing Trajectory Tracking and Attitude Control
④ Search and Obstacle Avoidance (Static + Dynamic)
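For module ②, one common choice is a minimum-jerk polynomial from the current position to the predicted touchdown point; a one-axis sketch follows (the real planner must also track the moving deck):

```python
import numpy as np

def min_jerk(x0, xf, duration, dt=0.02):
    """Minimum-jerk position profile from x0 to xf over `duration` seconds.
    Zero start/end velocity and acceleration; returns sampled positions."""
    t = np.arange(0.0, duration + dt, dt)
    tau = t / duration
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # classic minimum-jerk blend
    return x0 + (xf - x0) * s
```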
This technology can be applied to maritime inspection, disaster response, offshore resupply, and multi-platform intelligent missions.