RoboFusion
Pioneered a machine-learning-driven sensor fusion system to dynamically control a 4-DOF robotic arm via hand gestures, enabling real-time, accurate human-robot interaction.
Led integration of ROS2 with MediaPipe for 3D hand tracking and MPU6050 sensor-data fusion, creating a robust, noise-resistant closed-loop control system for precise robotic arm manipulation (see the sketch below).
Learning Outcome:
Gained expertise in sensor fusion and machine learning for real-time robotic control, enhancing human-robot interaction using ROS2 and MediaPipe. Developed a noise-resistant closed-loop system, improving gesture-based precision and motion stability.
Team Members: Ankit Gole, Shreya Boyane, Shivam Shinde, Simran Chauhan
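A minimal sketch of the core idea, assuming a webcam, the legacy MediaPipe Hands API, and a hypothetical landmark-to-joint mapping; a complementary filter stands in for the project's MPU6050 fusion step, and in the actual system the command would be published on a ROS2 joint topic rather than printed:

```python
# Sketch: track a hand with MediaPipe and fuse MPU6050 data with a
# complementary filter. The joint mapping and filter weight are
# illustrative assumptions, not the project's exact implementation.
import math
import cv2
import mediapipe as mp

ALPHA = 0.98  # filter weight: trust gyro short-term, accel long-term

def complementary_pitch(pitch, gyro_rate, ax, ay, az, dt):
    """One filter step: integrate gyro, correct drift with accel tilt."""
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    return ALPHA * (pitch + gyro_rate * dt) + (1.0 - ALPHA) * accel_pitch

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
pitch = 0.0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        wrist = results.multi_hand_landmarks[0].landmark[0]  # normalized [0, 1]
        shoulder_cmd = (0.5 - wrist.y) * math.pi  # hypothetical mapping to radians
        # pitch = complementary_pitch(pitch, gz, ax, ay, az, dt)  # with real IMU reads
        # In the project, shoulder_cmd would go out on a ROS2 joint topic.
        print(f"shoulder={shoulder_cmd:.2f} rad, imu_pitch={pitch:.1f} deg")
cap.release()
```

A complementary filter is a common lightweight choice here because it damps accelerometer noise while correcting gyro drift with a single tunable weight.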
Robotic Arm Simulation and Control using ROS2
Developed a gesture-controlled robotic arm using machine learning, achieving 90% accuracy in real-time gesture recognition.
Utilized MoveIt and Gazebo to simulate and tune the arm before hardware deployment, refining system responsiveness and accuracy.
Implemented the RRT (Rapidly-exploring Random Tree) algorithm for collision-free motion planning, significantly improving the precision and responsiveness of the system (see the sketch below).
Learning Outcome:
Strengthened skills in machine learning for real-time control, achieving high-accuracy gesture recognition. Gained hands-on experience in motion planning with RRT, enhancing robotic arm responsiveness and precision.
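A compact 2D RRT sketch illustrating the planner named above; the planar workspace, step size, and toy circular obstacle are illustrative assumptions rather than the project's MoveIt configuration:

```python
# Minimal 2D RRT: sample, extend from the nearest node, keep
# collision-free nodes, and reconstruct the path once near the goal.
import math
import random

STEP = 0.5  # extension step size (illustrative)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def collision_free(p):
    return dist(p, (5.0, 5.0)) > 1.5  # toy circular obstacle at (5, 5)

def rrt(start, goal, iters=5000):
    parents = {start: None}
    for _ in range(iters):
        # 10% goal bias speeds convergence toward the target.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(parents, key=lambda n: dist(n, sample))
        theta = math.atan2(sample[1] - nearest[1], sample[0] - nearest[0])
        new = (nearest[0] + STEP * math.cos(theta),
               nearest[1] + STEP * math.sin(theta))
        if not collision_free(new):
            continue
        parents[new] = nearest
        if dist(new, goal) < STEP:  # close enough: walk back to the start
            path, node = [goal], new
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
    return None

print(rrt((0.0, 0.0), (9.0, 9.0)))
```

In the real system, the same sampling loop runs in the arm's joint space with MoveIt's collision checker in place of the toy obstacle test.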
Semi-Autonomous Agricultural Robot
Designed a 4x4 mobile agricultural robot featuring real-time plant disease detection using computer vision and machine learning.
Constructed an IoT dashboard for real-time updates on plant health and sensor data (see the sketch below); effective sensor-data integration and AI-driven decision-making reduced manual labor by 30%.
Learning Outcome:
Enhanced proficiency in computer vision and machine learning for real-time agricultural automation. Gained experience in IoT integration and AI-driven decision-making, optimizing efficiency and reducing manual labor.
Team Members: Nilay Jadav, Aum Barai
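A sketch of the detect-and-report loop, assuming a trained Keras CNN (`leaf_disease_cnn.h5`), illustrative class labels, and a hypothetical MQTT broker/topic feeding the dashboard; none of these names come from the project itself:

```python
# Sketch: classify a leaf image with a CNN, then push the result to the
# dashboard broker. Model file, labels, and broker are placeholders.
import cv2
import numpy as np
import tensorflow as tf
import paho.mqtt.publish as publish

model = tf.keras.models.load_model("leaf_disease_cnn.h5")  # assumed trained CNN
CLASSES = ["healthy", "blight", "rust"]                    # illustrative labels

def classify(frame_bgr):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(x[None, ...], verbose=0)[0]
    return CLASSES[int(np.argmax(probs))], float(np.max(probs))

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    label, conf = classify(frame)
    # Publish one reading to the dashboard (hypothetical host/topic).
    publish.single("farm/plant_health", f"{label}:{conf:.2f}",
                   hostname="broker.local")
cap.release()
```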
Rover Sentinel
Designed and engineered a six-wheeled autonomous rover incorporating the rocker-bogie mechanism, enabling robust mobility and adaptability across diverse terrains.
Integrated a depth camera, NVIDIA Jetson Xavier, and GPS, achieving 93% accuracy in real-time obstacle detection with a custom-trained YOLOv8 model (see the sketch below), enhancing autonomous navigation and environmental awareness.
Implemented onboard fire detection using computer vision and multi-sensor fusion, with real-time IoT-based alert notifications providing proactive safety measures for campus security.
Learning Outcome:
Enhanced expertise in robotic mobility and terrain adaptability using the Rocker-Bogie mechanism. Gained proficiency in computer vision, AI-driven obstacle detection, and IoT-based safety systems, optimizing autonomous navigation and real-time hazard response.
Team Member: Aum Barai
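An obstacle-detection sketch using the Ultralytics YOLOv8 API; the weights filename and confidence threshold are placeholders, and the depth-camera range gating and fire-alert pipeline on the Jetson are omitted for brevity:

```python
# Sketch: run a custom YOLOv8 model on camera frames and draw detections.
import cv2
from ultralytics import YOLO

model = YOLO("rover_obstacles.pt")  # custom-trained weights (hypothetical path)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.5, verbose=False)[0]
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])  # pixel corners of detection
        label = model.names[int(box.cls[0])]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.imshow("rover", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```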
Biometric Attendance System
Developed a scalable, cost-effective real-time IoT fingerprint attendance system using an ESP32 and Firebase, with cloud-based data storage for secure and efficient record management (see the sketch below).
Designed and 3D-printed a compact, portable housing using PLA, making the system small enough to carry in a pocket while ensuring durability and ease of deployment.
Successfully implemented the system at EV Lab, achieving a 40% improvement in attendance tracking efficiency.
Learning Outcome:
Gained hands-on experience in IoT-based biometric systems, cloud integration with Firebase, and 3D printing for compact enclosures. Improved real-time system deployment and hardware design for enhanced portability and efficiency.
Team Members: Nandani Dholakia, Mansi Antala
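A sketch of the attendance write against Firebase's Realtime Database REST API, shown in Python for readability (on the ESP32 the equivalent HTTPS request is issued from firmware); the database URL and auth token are placeholders:

```python
# Sketch: append one attendance record to Firebase via the REST API.
import time
import requests

DB_URL = "https://attendance-demo.firebaseio.com"  # hypothetical project URL
AUTH = "DATABASE_SECRET_OR_ID_TOKEN"               # placeholder credential

def log_attendance(fingerprint_id: int):
    record = {"fingerprint_id": fingerprint_id, "timestamp": int(time.time())}
    # POST appends the record under /attendance with an auto-generated key.
    r = requests.post(f"{DB_URL}/attendance.json",
                      params={"auth": AUTH}, json=record, timeout=10)
    r.raise_for_status()
    return r.json()  # e.g. {"name": "<generated-key>"}

print(log_attendance(42))
```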