Check out my recent talk as a finalist at the 10th Annual Texas A&M University Three Minute Thesis Competition. My talk was about my recently developed mobile app with a built-in AI model for real-time flood depth estimation on urban roads.
Available at https://www.youtube.com/watch?v=1XkTXpLnuDs
Developing image processing methods to detect and measure traffic signs in photos
In this project, a new approach to estimating floodwater depth from street photos, using stop signs as measurement benchmarks, was introduced and validated. Using data mining techniques, an in-house dataset named BluPix 2020.1 was generated, consisting of paired web-mined photos of submerged stop signs across the 10 FEMA regions (for U.S. locations) and Canada. A deep neural network (Mask R-CNN) was used to detect the octagonal face of the sign, and the image was then processed with classical image processing techniques, i.e., the Canny edge detector and the probabilistic Hough transform, to detect vertical edges and identify potential pole candidates. Since stop signs have standard dimensions, the number of inches corresponding to one pixel in each photo was calculated and used to convert the visible pole length to inches; floodwater depth was then estimated as the difference between the pole lengths of the same stop sign in a pair of pre- and post-flood photos taken from the same location. Overall, pole length was estimated with an RMSE of 17.43 in. in pre-flood photos and 8.61 in. in post-flood photos, leading to a mean absolute error of 12.63 in. in floodwater depth estimation. (More info)
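As a rough illustration of the edge-detection step, the sketch below uses OpenCV's Canny detector and probabilistic Hough transform to extract near-vertical line segments as pole candidates. The thresholds, the verticality filter, and the file name are assumptions for illustration only, not the study's actual parameters; in the real pipeline the sign region itself comes from Mask R-CNN.

```python
import cv2
import numpy as np

# Hypothetical file name; in the actual pipeline the sign region would be
# localized first by Mask R-CNN before the pole search.
image = cv2.imread("post_flood_stop_sign.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Canny edge detector (thresholds are illustrative, not the study's values)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform to extract line segments from the edge map
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)

# Keep only near-vertical segments as pole candidates
pole_candidates = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 80 <= angle <= 100:  # roughly vertical
            pole_candidates.append(((x1, y1), (x2, y2)))
```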
Developing computer vision models for flood depth estimation in street-level photos
In this project, a deep neural network (YOLOv4) was fine-tuned via transfer learning on an in-house dataset of crowdsourced stop sign photos. The model was trained on pre-flood photos (334 photos from the Microsoft COCO dataset and 61 web-mined photos of other traffic signs with zero labels, i.e., negative examples) and post-flood photos (270 web-mined photos, 71 web-mined photos of other traffic signs with zero labels, and 64 synthetic photos) of stop signs. To avoid overfitting to the training set, the optimal number of training iterations was selected using 5-fold cross-validation, and various data augmentation methods were applied to enlarge the dataset. The trained model was then used to estimate flood depth at the location of a stop sign by comparing the visible portions of the pole in pre- and post-flood photos. Testing the model on 11 sample photos from the 2021 Pacific Northwest flood yielded an MAE of 6.978 in. for flood depth estimation, on par with error values reported in previous studies. (More info)
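A minimal sketch of how 5-fold cross-validation could be used to pick the number of training iterations is shown below. The checkpoint grid and the `validation_error` stand-in (which would wrap actual YOLOv4 training and evaluation) are placeholders; the 800-photo count is simply the sum of the dataset sizes listed above.

```python
import numpy as np
from sklearn.model_selection import KFold

# Indices of the training photos (334 + 61 + 270 + 71 + 64 = 800)
photo_indices = np.arange(800)
candidate_iterations = [1000, 2000, 3000, 4000, 5000]  # assumed checkpoint grid

def validation_error(train_idx, val_idx, iterations):
    """Stand-in for training YOLOv4 for `iterations` steps on the training
    fold and measuring detection error on the validation fold. Replaced by a
    dummy value so the sketch runs without the Darknet toolchain."""
    rng = np.random.default_rng(iterations + len(val_idx))
    return rng.uniform(5, 15)  # dummy error in inches

kf = KFold(n_splits=5, shuffle=True, random_state=0)
mean_errors = {}
for iters in candidate_iterations:
    fold_errors = [validation_error(tr, va, iters)
                   for tr, va in kf.split(photo_indices)]
    mean_errors[iters] = np.mean(fold_errors)

# Choose the iteration count with the lowest average validation error
best_iters = min(mean_errors, key=mean_errors.get)
print(f"Selected iteration count: {best_iters}")
```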
Developing a mobile app with a built-in computer vision model for flood depth estimation
In this project, an Android mobile application (called Blupix) was developed in Android Studio with embedded computing functionality for on-demand floodwater depth estimation from geocoded photos of submerged stop signs. Blupix allows the user to capture a photo of a submerged stop sign with the mobile device's camera; automatically obtains and stores the geographic location and azimuth angle of the device from its accelerometer and geomagnetic field (magnetometer) sensors when the photo is taken; provides an interactive Google Street View interface to locate the same stop sign in a flood-free view; runs a lightweight object detection model (EfficientDet03) trained to detect stop signs and poles and measure their sizes in the pre- and post-flood images; and finally calculates the floodwater depth as the difference in pole lengths. For each image, three steps are needed to load and run the object detection model: (1) preparing the image as a TensorImage, (2) creating a detector object, and (3) connecting 1 and 2 by running detection. The captured image is decoded into Bitmap format and passed to the object detection model.
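The snippet below mirrors those three steps using the Python flavor of the TensorFlow Lite Task Library (the app itself uses the Android version and works on Bitmaps); the model and image file names and the score threshold are placeholders.

```python
from tflite_support.task import core, processor, vision

# 1. Prepare the image as a TensorImage
image = vision.TensorImage.create_from_file("submerged_stop_sign.jpg")

# 2. Create the detector object from the bundled .tflite model
base_options = core.BaseOptions(file_name="stop_sign_detector.tflite")
detection_options = processor.DetectionOptions(score_threshold=0.3, max_results=5)
options = vision.ObjectDetectorOptions(base_options=base_options,
                                       detection_options=detection_options)
detector = vision.ObjectDetector.create_from_options(options)

# 3. Connect 1 and 2: run detection and read out the detected boxes
result = detector.detect(image)
for detection in result.detections:
    box = detection.bounding_box
    print(detection.categories[0].category_name, box.width, box.height)
```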
The interface of the developed mobile app
Capture images
Built-in computer vision model for real-time object detection in captured images
Geolocating images using data extracted from geomagnetic field sensors
I developed this app for real-time estimation of flood depth on urban roads. The workflow involves taking a photo of a flooded stop sign, finding the same stop sign in Google Street View and taking a snapshot, running the built-in computer vision model to detect the sign and pole in both images, and comparing the visible pole lengths before and after the flood.
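The core depth calculation reduces to a scale conversion, sketched below under the assumption of the common 30 in. stop sign face width; the pixel measurements in the example are made up for illustration.

```python
def flood_depth_inches(sign_width_px_pre, pole_px_pre,
                       sign_width_px_post, pole_px_post,
                       sign_width_in=30.0):
    """Estimate floodwater depth from a pre-/post-flood photo pair.

    The stop sign's standard face width (assumed 30 in. here) gives the
    inches-per-pixel scale in each photo; the depth is the drop in visible
    pole length between the two photos.
    """
    pre_scale = sign_width_in / sign_width_px_pre     # inches per pixel, pre-flood
    post_scale = sign_width_in / sign_width_px_post   # inches per pixel, post-flood
    pole_pre_in = pole_px_pre * pre_scale             # full visible pole, dry conditions
    pole_post_in = pole_px_post * post_scale          # visible pole above the water
    return pole_pre_in - pole_post_in

# Example with made-up pixel measurements
print(flood_depth_inches(120, 290, 95, 180))
```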
Implementing route optimization using Geographic Information Science (GIS) and the A* algorithm
In this project, the first flood depth estimation model (based on Mask R-CNN and image processing) was tested on six sample photos of stop signs taken during Hurricane Harvey (Houston, Texas, 2017) to calculate flood depth. A distance-decay function was then implemented in ArcGIS Pro to transform the point-by-point floodwater depth estimates into an area-wide flood inundation map. The generated map was used to build a risk-informed routing system based on the A* search algorithm that calculates the shortest flood-free route between points of interest (i.e., intelligent wayfinding).
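A minimal sketch of such a routing step is shown below: A* search over a grid of estimated floodwater depths, treating cells above an assumed passability threshold as blocked. The grid, threshold, and 4-connected movement are illustrative assumptions, not the system's actual road-network graph.

```python
import heapq

def astar_flood_free(grid, start, goal, max_depth_in=6.0):
    """A* shortest path over a grid of floodwater depths (inches).

    `grid[r][c]` holds the estimated depth at that cell (from the inundation
    map); cells deeper than `max_depth_in` are treated as impassable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:
            continue
        came_from[current] = parent
        if current == goal:  # reconstruct the route
            path = [current]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] <= max_depth_in:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), current))
    return None  # no flood-free route found

# Toy 4x4 depth grid (inches); 99 marks a deeply flooded block
depths = [[0, 0, 99, 0],
          [0, 0, 99, 0],
          [0, 0,  0, 0],
          [0, 0, 99, 0]]
print(astar_flood_free(depths, (0, 0), (0, 3)))
```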
Developing a web application for crowdsourced data collection and mapping
I developed the web application strategy and led a student team from the Department of Computer Science in building a crowdsourcing platform for flood events. The app was developed using Django and React.js. Following development, I am responsible for reviewing and analyzing the crowdsourced visual data with artificial intelligence (AI) models to extract point-by-point floodwater depth information.
Available at: https://blupix.geos.tamu.edu/
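For illustration, a crowdsourced report on such a platform could be represented with a Django model like the hypothetical sketch below; the field names are assumptions, not the actual Blupix schema.

```python
from django.db import models

class FloodPhotoReport(models.Model):
    # Hypothetical fields for a crowdsourced flood photo submission
    photo = models.ImageField(upload_to="reports/")
    latitude = models.FloatField()
    longitude = models.FloatField()
    azimuth_deg = models.FloatField(null=True, blank=True)
    captured_at = models.DateTimeField()
    submitted_at = models.DateTimeField(auto_now_add=True)
    # Filled in after the AI model processes the photo
    estimated_depth_in = models.FloatField(null=True, blank=True)

    def __str__(self):
        return f"Report at ({self.latitude:.4f}, {self.longitude:.4f})"
```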
Presenting research findings and getting input from the community and first responders through community meetings
To design an effective technology-driven solution, it is crucial to get input from the public. To this end, a community needs assessment survey was conducted (via Qualtrics), and the results informed the development of decision support systems that integrate advanced cyberinfrastructure, decision science, and GIS to enable optimal decision-making in flood emergency management. In these meetings, communities were also introduced to the affordable, easy-to-use technology-driven solutions developed for flood depth estimation, and instructions for their use were provided.