Welcome to the Advanced STEM Research Class!
This project used image processing and computer vision techniques to process real-time video from the drone's front and bottom cameras, recognize and locate guiding marks, lines, or faces, and navigate the drone autonomously from a starting point to a destination in an indoor environment, carrying a light-weight payload based on user commands and sensor information.
Have you ever thought that collecting things like attendance sheets every morning is a tedious and boring task? Can we turn it into an automatic process, such as delivering them by drone? This is the problem that inspired a group of students to create this project.
The Parrot A.R.Drone 2.0 was used as the physical platform, with Linux as the operating system. JavaScript and Node.js were used to program the drone, and OpenCV provided a rich library of computer vision functions. The code processed the real-time video, extracted a specific color (orange) and shape (circle) of an object (an orange circular mark), calculated the area of the shape, estimated the distance to the object, and controlled the drone to fly to a certain distance in front of the object. Thus, the drone could follow an orange circle and move accordingly as the circle moved.
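A minimal sketch of the color-and-shape detection step, written here in Python with OpenCV for illustration (the actual project was programmed in JavaScript/Node.js); the HSV thresholds, the circularity cutoff, and the area-to-distance calibration constant are assumptions that would be tuned on real footage:

    import cv2
    import numpy as np

    # Assumed HSV range for "orange" and an assumed calibration constant relating
    # the mark's apparent area (pixels) to its distance from the camera.
    ORANGE_LO = np.array([5, 120, 120])
    ORANGE_HI = np.array([20, 255, 255])
    CALIB_K = 9000.0  # area_pixels * distance_m^2 measured at calibration time

    def find_orange_circle(frame):
        """Return (center_x, center_y, estimated_distance_m) or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, ORANGE_LO, ORANGE_HI)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
        for c in sorted(contours, key=cv2.contourArea, reverse=True):
            area = cv2.contourArea(c)
            if area < 200:                      # ignore small specks
                break
            perimeter = cv2.arcLength(c, True)
            circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-6)
            if circularity > 0.7:               # roughly circular blob
                (x, y), _ = cv2.minEnclosingCircle(c)
                distance = (CALIB_K / area) ** 0.5   # apparent area shrinks with distance squared
                return x, y, distance
        return None

The controller would then yaw or strafe to center the detected circle in the frame and move forward or backward until the estimated distance matches the desired standoff.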
The next step was to place a series of orange circles in the hallway and create a path-planning algorithm so that the drone would detect, count, and follow these marks and navigate autonomously from one classroom to another to deliver a light-weight payload according to user commands.
This project acquires, analyzes, and interprets brainwaves in order to generate mental commands to control a physical device or play a game.
A serious stroke or spinal injury can leave a patient paralyzed in all four limbs (as in locked-in syndrome). Such patients can still think perfectly but are unable to communicate or move their bodies. At the same time, Brain-Computer Interface (BCI) technology has made major progress, and it is now possible to acquire brainwaves with low-cost headsets. Can we build a bridge between brainwaves and mental commands so that these patients can communicate with the outside world or control a physical device (e.g., a wheelchair)?
Neurosky Mindwave and Emotiv Insight Electroencephalogram (EEG) headsets were used to extract the brainwave signals (alpha, beta, delta and theta waves). OpenVibe software was used to further process and analyze the captured signals.
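To make the band separation concrete, here is a small Python sketch that computes the average power of the classic EEG bands from one window of samples (the project used OpenVibe for this processing; the sampling rate and band edges below are standard textbook values, not project-specific settings):

    import numpy as np

    FS = 256  # assumed headset sampling rate (Hz)
    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg_window):
        """Average spectral power of each classic EEG band in a 1-D signal window."""
        spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
        freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
        return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}

A mental command can then be a simple rule or classifier on these band powers, for example distinguishing a relaxed state (high alpha) from a concentrating state (high beta).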
This technology has the potential to greatly benefit paralyzed patients. The same technology could also allow gamers to play games or control a device without lifting a finger; for example, gamers could use their thoughts to steer their avatars through a virtual world, and a drone pilot could fly a drone by thought alone.
This electroencephalography project was presented at Girls in Science and Engineering Day on the Intrepid and was well received by the visitors.
As drones become cheap and popular, they also create new safety and privacy issues. How can civilians safeguard their homes from drone intrusion? The first step is to detect, locate, and track the drones entering the airspace around their residences. Which types of drone detection and tracking systems would be effective and low-cost? These are the questions this project aims to answer. A possible solution is to take inspiration from nature.
Insects do not have sophisticated visual organs like humans do; instead, a compound eye consists of perhaps thousands of very simple, independent photoreception units that distinguish brightness and color. Could we build a vision system inspired by the compound eye structure to detect, locate, and track flying drones?
A simple digital compound-eye model (a 10-face polyhedron) was designed with Computer-Aided Design (CAD) software, and a physical model with 9 cheap, low-resolution webcams capturing real-time video concurrently was also built. A flying drone may be detected by multiple cameras at the same time. A scheme was proposed to find the drone's 3D coordinates (localization) using machine learning instead of sophisticated geometric calculations. By hanging a drone at various known locations and capturing images through all the webcams, we can create a training dataset, train a neural network, and obtain a model that maps webcam images to the 3D coordinates of the drone's location. The trained model can then be deployed as a low-cost solution.
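A minimal Python sketch of that training step, using a small multi-layer perceptron as the regression network (the file names, feature layout, and network size are assumptions for illustration, not the project's actual pipeline):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical data layout: one sample per hanging position of the drone.
    # X: the 9 webcam frames, downsampled and flattened into one feature vector.
    # y: the known 3D coordinates (x, y, z) of the drone at that position.
    X = np.load("webcam_features.npy")   # shape (n_samples, n_features)
    y = np.load("drone_positions.npy")   # shape (n_samples, 3)

    model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
    model.fit(X, y)

    # At deployment time, the same feature extraction is applied to live frames,
    # and the network outputs the estimated 3D location of the drone.
    estimated_xyz = model.predict(X[:1])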
Can a person's hand (or a robotic hand) be controlled by another person's hand? The original idea came from the "possessed hand" research conducted by the Rekimoto Lab at The University of Tokyo. Since it is a direct connection between two hands belonging to two people, we called it "hand-to-hand communication".
This project could be divided into two parts: (1) capturing and decoding the neuromuscular signals produced in one person's arm by hand gestures, and (2) encoding and applying electric signals to another person's (or a robotic) arm to replicate the same hand gestures.
The project started with a study of the anatomy of the human neuromuscular system of the hands and arms. The neuromuscular signals were captured by a Backyard Brains Muscle SpikerShield paired with an Arduino Uno board through medical electrode pads on the test subject's lower arm. The Muscle SpikerShield is a consumer-grade, low-cost electromyography (EMG) device. The values of the digitized EMG signals, along with the corresponding hand gestures, were used to train a Multi-Layer Perceptron (MLP) to classify different hand gestures.
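A compact Python sketch of that classification step, assuming the digitized EMG values have already been logged from the Arduino into fixed-length windows (the file names, window format, and gesture labels are illustrative, not the project's actual data):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Hypothetical dataset: each row is one window of digitized EMG samples from
    # the SpikerShield; each label is the gesture performed (e.g., 0 = rest,
    # 1 = fist, 2 = open hand).
    X = np.load("emg_windows.npy")      # shape (n_windows, n_samples_per_window)
    y = np.load("gesture_labels.npy")   # shape (n_windows,)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000)
    clf.fit(X_train, y_train)
    print("gesture classification accuracy:", clf.score(X_test, y_test))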
The "possessed hand" project was originally designed to teach beginners to play new instruments. It has potential to apply to control robotic hands to wok in a hazardous environment or control a virtual hand in games by replicating human hand gestures.
Charging your mobile phone while walking is a dream for mobile users. How can we design a wearable generator, driven by the human gait, that can efficiently charge our phones?
First, the walking trajectory was captured by video recording a subject walking on a treadmill, followed by video analysis using Kinovea (a video annotation tool designed for sports analysis). Second, the possible (theoretical) energy output generated through walking was calculated. The calculation was separated into two segments, the thigh and the lower leg (divided at the knee), each consisting of three energy terms: potential energy, translational kinetic energy, and rotational kinetic energy. OpenSim, a software package for modeling, simulating, controlling, and analyzing the neuromusculoskeletal system (developed by Stanford University), was used to study the effects that adding mass at the knee has on gait motion and energy output.
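The per-segment energy can be written out directly; the short Python sketch below shows the three terms for one segment at one video frame (the mass, speeds, and moment of inertia are placeholder values, not measurements from the project):

    G = 9.81  # gravitational acceleration (m/s^2)

    def segment_energy(mass, com_height, com_speed, moment_of_inertia, angular_speed):
        """Total mechanical energy of one limb segment (thigh or lower leg)."""
        potential = mass * G * com_height                          # m * g * h
        translational = 0.5 * mass * com_speed ** 2                # 1/2 * m * v^2
        rotational = 0.5 * moment_of_inertia * angular_speed ** 2  # 1/2 * I * w^2
        return potential + translational + rotational

    # Example with placeholder values for a thigh segment at one instant:
    E_thigh = segment_energy(mass=7.0, com_height=0.8, com_speed=1.2,
                             moment_of_inertia=0.12, angular_speed=3.0)

Summing these terms over both segments and tracking how they change across the gait cycle gives an upper bound on the energy a gait-driven generator could harvest.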
Various generator design ideas were explored, and their 3D models were created using CAD tools. The goal was to find the design with the best energy-generating efficiency through both simulation and experiments.
Dr. Phillip Martin of the Department of Kinesiology at Iowa State University and Dr. Todd Royer of the Department of Kinesiology and Applied Physiology at the University of Delaware provided valuable guidance for this project.
For a visually challenged person navigating New York City, the mass transportation system, especially the subway, is probably one of the most difficult and dangerous environments. Can we create a mobile app to help the visually challenged navigate the subway system?
Since GPS may not be available in an underground subway station, computer vision technology was adopted exclusively to help visually challenged people "see" their environment through the camera in their mobile phones.
The first problem to attack was to create an app that can recognize subway signs, which are usually multi-colored and include numbers and letters. An iOS app was developed using Xcode as the IDE (Integrated Development Environment), Objective-C as the language, and OpenCV iOS as the library. The app processed the color images, extracted the numbers and letters, recognized the subway signs, and translated them into audio through the iOS VoiceOver function so that a visually challenged person could "hear" the signs.
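For illustration, the color-extraction step can be sketched in Python with OpenCV (the actual app was written in Objective-C with OpenCV iOS; the HSV range shown here for one line color is an assumption):

    import cv2
    import numpy as np

    # Assumed HSV range for one sign color (e.g., a green route bullet).
    GREEN_LO = np.array([45, 80, 80])
    GREEN_HI = np.array([75, 255, 255])

    def extract_sign_regions(frame):
        """Return cropped image regions likely to contain a colored subway sign."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, GREEN_LO, GREEN_HI)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        regions = []
        for c in contours:
            if cv2.contourArea(c) > 500:            # ignore small specks
                x, y, w, h = cv2.boundingRect(c)
                regions.append(frame[y:y + h, x:x + w])
        return regions

Each cropped region would then be passed to the number/letter recognition step, and the recognized sign read aloud through VoiceOver.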
The ultimate goal of the app is to recognize all the signs in subway stations, such as station names, arrows, and structural information such as entrances/exits, ticket booths, ticket vending machines, stairs, platforms, and obstacles. The app will then automatically provide path-planning information to help the visually challenged navigate this complex system safely.
If we let a single robot move randomly around a new environment and try to map it out, what will it "see" and how long will it take? If we instead let a swarm of robots move randomly around the same environment toward the same goal, will it take much less time? How can we integrate the information from all the robots? This is the core problem that the "swarm robots" project is trying to solve.
We started with simple sensors such as ultrasound range finders. Each robot was programmed to have an identical Field of View (FOV) defined by two parameters: angle and distance. Each robot moves along its own random, smooth, curving two-dimensional (2D) path. Their motion was simulated in a virtual environment using Mathworks Matlab.
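A small Python sketch of these two ingredients, a random smooth path and a simple angle-plus-distance FOV test (the project's simulation was written in Matlab; the step size, turn limit, and FOV values below are made-up illustrative numbers):

    import numpy as np

    def random_smooth_path(steps=500, step_len=0.05, max_turn=0.2):
        """Random smooth 2-D path: the heading changes by a small random amount each step."""
        pos, heading = np.zeros(2), 0.0
        path = [pos.copy()]
        for _ in range(steps):
            heading += np.random.uniform(-max_turn, max_turn)  # gentle random turning
            pos = pos + step_len * np.array([np.cos(heading), np.sin(heading)])
            path.append(pos.copy())
        return np.array(path)

    def in_fov(robot_pos, robot_heading, point, fov_angle=np.pi / 3, fov_range=2.0):
        """True if `point` lies inside the robot's field of view (angle and distance)."""
        offset = np.asarray(point) - np.asarray(robot_pos)
        dist = np.linalg.norm(offset)
        bearing = np.arctan2(offset[1], offset[0]) - robot_heading
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
        return dist <= fov_range and abs(bearing) <= fov_angle / 2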
The goal of the project was to develop efficient algorithms to map out a new environment consisting of a closed boundary and 2D obstacles. The robots detect and map out this 2D environment while avoiding collisions with the boundary, the obstacles, and one another. The idea is not limited to 2D; it could be applied in a 3D space, where flying drones would map out the whole environment efficiently.
Visually challenged people rely heavily on braille or audio to understand the world. Unfortunately, both of these technologies provide only one-dimensional (1D) signals. Unlike sighted people, who enjoy the 3D world and take it for granted, the visually challenged can only receive information in 1D. Developing a tactile display that provides 2D information to the visually challenged is an urgent calling, and this project is one of the responses!
The project is divided into two parts: (1) learning and evaluating an existing tactile programming device, the Senseg FeelScreen technology on a Nexus tablet, and (2) designing a true 2D tactile display based on the electrovibration phenomenon.
A test program was developed to run on a Senseg Nexus tablet to demonstrate an understanding of tactile programming and explore the limitations of the tablet. The FeelScreen technology was impressive: the finger senses different tactile feedback at different locations on the screen, as determined by the program. However, the whole screen acts as just a single pixel. In other words, the tablet can still only deliver 1D signals, and it would be very hard for a visually challenged person to sense a 2D shape on the screen using only one finger. A new multi-pixel tactile display was proposed to provide a true 2D display on which the visually challenged could sense the whole screen at the same time.
The High School Autonomous Vehicle Competition (HSAVC) is an annual national competition in which teams' pre-programmed cars race autonomously on both fixed and random tracks based on vision input. Due to the pandemic, both the preparation for and the running of this in-person activity were impacted. Is it possible to create a software version of the competition so that students across the country, or even around the world, can still develop their algorithms, test their code's performance, and race online? That was the ultimate goal of this project during the pandemic!
Two Mathworks Matlab apps were developed using App Designer: (1) a Track Generator and (2) an Autonomous Driving Simulator. The Track Generator generates random HSAVC racing tracks based on the user's specification (number of sections); the generated track is then used for simulation. The Simulator lets users supply their own Matlab scripts to process the video input from the linescan camera (a 1D camera) and determine the steering angle of the front servo and the values of the two rear driving motors based on their algorithms. The Simulator then simulates a virtual car running on the generated virtual track according to those scripts.
The Track Generator creates a random track starting from the smallest possible track, a circle (4 sections), and then uses growth and mutation algorithms to grow it into a random track of any specified size. The Simulator consists of four modules: a camera model, a track model, a vehicle model, and a motor controller. It loops through these modules to create a continuous driving simulation.
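The loop structure can be illustrated with a deliberately simplified Python sketch (the actual apps are Matlab App Designer apps; the toy track, camera, and vehicle models below are assumptions meant only to show how the four modules hand data to each other):

    import numpy as np

    def track_center(x):
        """Track model: y-coordinate of the track centerline at position x (toy example)."""
        return 0.5 * np.sin(0.3 * x)

    def camera_model(state, n_pixels=128, view_width=2.0):
        """Camera model: a 1-D line scan with bright pixels where the track line appears."""
        scan = np.zeros(n_pixels)
        offsets = np.linspace(-view_width / 2, view_width / 2, n_pixels)
        line_y = track_center(state["x"] + 0.5)            # look slightly ahead
        idx = np.argmin(np.abs(state["y"] + offsets - line_y))
        scan[max(0, idx - 2): idx + 3] = 1.0                # a few bright pixels
        return scan

    def controller(scan):
        """Motor controller: steer toward the bright part of the line scan."""
        center = (np.argmax(scan) - len(scan) / 2) / (len(scan) / 2)
        return 0.8 * center, 1.0, 1.0     # steering, left motor, right motor

    def vehicle_model(state, steering, dt=0.05, speed=1.0):
        """Vehicle model: simple kinematic update (motor values ignored in this toy version)."""
        state["x"] += speed * dt
        state["y"] += steering * speed * dt
        return state

    state = {"x": 0.0, "y": 0.0}
    for _ in range(400):                                    # main simulation loop
        steering, left, right = controller(camera_model(state))
        state = vehicle_model(state, steering)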
The project was presented to the host of the HSAVC, Dr. Marc E. Herniter (Professor, Electrical and Computer Engineering Department, Rose-Hulman Institute of Technology), and to the Mathworks representative, Ms. Akshra Narasimhan Ramakrishnan.
It was also submitted to and accepted by the 2021 IEEE Integrated STEM Education Conference, an international STEM conference, where it won 3rd place in the Best Poster Award in the Engineering category!
The F1 in Schools STEM Competition is a STEM-intensive competition. From car design through manufacturing and testing, a wide range of technologies is integrated into the competition. In particular, advanced computer tools are used to shorten design cycles, accelerate and improve the manufacturing process, and push race car performance to the limit:
Computer-Aided Design (CAD) - create the 3D models of the model cars.
Computational Fluid Dynamics (CFD) - predict the aerodynamic performance of the car before manufacturing.
Finite-Element Analysis (FEA) - analyze the mechanical strength of the car structure.
Computer-Aided Manufacturing (CAM) - generate tool paths for manufacturing.
Computer Numerical Control (CNC) - machine the car body from a foam block.
3D-printing software - manufacture the front and rear wings and the wheel systems.
However, there is a missing link between CFD and the cars manufactured with CNC routers and 3D printers: although drag coefficients and drag forces can be predicted computationally through CFD, competition teams still relied on physical track testing to determine the race time, the ultimate performance measure of their cars.
This project set out to create a race time calculator, i.e., a virtual track that predicts the ultimate performance of race cars without manufacturing them.
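The core idea of such a calculator can be sketched in a few lines of Python: integrate the car's equation of motion, thrust minus aerodynamic drag minus rolling resistance, until the car covers the track length (every parameter below, including the thrust curve, is a placeholder, not a value measured by the team):

    import numpy as np

    TRACK_LENGTH = 20.0   # m, standard F1 in Schools track length
    MASS = 0.055          # kg, assumed car mass
    RHO = 1.225           # kg/m^3, air density
    CD_A = 0.0008         # m^2, drag coefficient * frontal area (e.g., from CFD)
    MU_ROLL = 0.01        # assumed rolling resistance coefficient
    G = 9.81

    def thrust(t):
        """Very rough CO2 cartridge thrust curve: a short, decaying burst (assumed shape)."""
        return 12.0 * np.exp(-t / 0.25) if t < 0.5 else 0.0

    def race_time(dt=1e-4):
        """Integrate F = thrust - drag - rolling resistance until the car finishes."""
        x, v, t = 0.0, 0.0, 0.0
        while x < TRACK_LENGTH:
            drag = 0.5 * RHO * CD_A * v ** 2
            rolling = MU_ROLL * MASS * G
            a = (thrust(t) - drag - rolling) / MASS
            v = max(0.0, v + a * dt)
            x += v * dt
            t += dt
            if t > 10.0:     # safety cut-off if the car never finishes
                break
        return t

    print(f"predicted race time: {race_time():.3f} s")

With the drag term taken from the team's CFD results, this kind of virtual track lets a design be evaluated before anything is machined or printed.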
The results of this research project were integrated into the Engineering R&D efforts of Roosevelt Racers during the 2020 F1 in Schools STEM Competition. Due to the pandemic, the 2020 competition was conducted virtually, and many teams had difficulty testing their cars on a physical track. The race time calculator allowed the whole car design and testing process to run on a computer!
After presenting the Race Time Calculator in the competition, Roosevelt Racers won the Best Research & Development Award of 2020!