WEEK 1 10/24 - 10/28
Using Python and OpenCV, I was able to set up object detection with a pre-trained model. As seen in the picture, bounding boxes are placed over what the AI detects, along with a confidence score. No problems came up while setting this up, which is super exciting. The next step is to start running trials to get statistics on how accurate and consistent the algorithm is.
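The box-drawing step can be sketched without the model itself. This is a minimal sketch assuming an SSD-style output layout (rows of [image_id, class_id, confidence, x1, y1, x2, y2] with normalized coordinates), which is what OpenCV's dnn module commonly returns; the actual model and class names aren't shown in the log, so that layout is an assumption:

```python
import numpy as np

def boxes_from_detections(detections, frame_w, frame_h, conf_threshold=0.5):
    """Convert SSD-style rows [image_id, class_id, confidence, x1, y1, x2, y2]
    (coordinates normalized to [0, 1]) into pixel-space boxes."""
    boxes = []
    for row in detections:
        confidence = float(row[2])
        if confidence < conf_threshold:
            continue  # drop weak detections
        x1 = int(row[3] * frame_w)
        y1 = int(row[4] * frame_h)
        x2 = int(row[5] * frame_w)
        y2 = int(row[6] * frame_h)
        boxes.append((int(row[1]), confidence, (x1, y1, x2, y2)))
    return boxes
```

Each returned box would then be drawn on the frame (e.g. with `cv2.rectangle`) and labeled with its confidence score.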
WEEK 2 10/31 - 11/4
VISUAL TO COME
This week was purely some extra research so I can understand what I'm writing instead of just knowing it works. I learned how the neural network gets trained, and then how the library uses that trained model: it performs certain pre-processing steps to create data that can be analyzed, then outputs the results we want.
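The pre-processing step I read about can be sketched in numpy. This is a generic illustration (the scale and mean values below are common defaults, not ones confirmed by the log): pixel values get normalized and the image is reordered from height-width-channel to the batch-channel-height-width layout networks expect.

```python
import numpy as np

def preprocess(frame, scale=1 / 127.5, mean=127.5):
    """Mimic the pre-processing many detection libraries apply before
    inference: normalize pixel values and reorder HxWxC -> 1xCxHxW."""
    x = (frame.astype(np.float32) - mean) * scale   # roughly [-1, 1]
    x = np.transpose(x, (2, 0, 1))                  # channels first
    return x[np.newaxis, ...]                       # add batch dimension
```

Resizing to the network's fixed input size would normally happen before this step.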
WEEK 3 11/14 - 11/18
This week I spent my time refactoring my code to make it easier to navigate and change. Once I start changing pieces of the code to perform specific tasks, I want to be able to do so without touching many other parts. I took an orthogonal approach to achieve this: my code is now structured so that the functionality of one piece can be changed without everything breaking.
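One way to picture that orthogonal structure (the names here are hypothetical, not from the actual codebase): the pipeline depends only on a small interface, so any detection backend can be swapped in without the surrounding code changing.

```python
from typing import List, Protocol, Tuple

Box = Tuple[int, int, int, int]

class Detector(Protocol):
    """Any detection backend just has to provide this one method."""
    def detect(self, frame) -> List[Box]: ...

class StubDetector:
    """Placeholder backend; a model-based or edge-based detector
    would implement the same interface."""
    def detect(self, frame) -> List[Box]:
        return [(0, 0, 10, 10)]

def run_pipeline(detector: Detector, frames) -> int:
    """The pipeline only knows about the interface, so swapping
    detectors never breaks the surrounding code."""
    hits = 0
    for frame in frames:
        if detector.detect(frame):
            hits += 1
    return hits
```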
I also made a remote git repository to keep track of any changes I may make in the future. For now, there's no coding I need to do to start collecting data. Next week I plan to collect data on the accuracy of this model.
WEEK 4 11/21 - 11/25
This week was spent collecting data about the accuracy of the trained AI model. I tested the model by making sure the object being detected (in this case, a person) was always in frame and counting for how many frames the person was detected. I experimented with different lighting, distances, clothing, and so on.
The AI model detected the person in 97.6% of frames (impressive :o).
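The accuracy figure is just the fraction of frames with a successful detection; a minimal sketch (the trial counts in the test are made-up illustrations, not my real data):

```python
def detection_rate(frame_results):
    """frame_results: one boolean per frame, True when the person was
    detected while known to be in frame. Returns a percentage."""
    if not frame_results:
        raise ValueError("no frames recorded")
    return 100.0 * sum(frame_results) / len(frame_results)
```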
The goal for next week is research into various ways to detect objects other than using a trained model.
WEEK 5 11/28 - 12/2
I've decided to work on an algorithm to implement edge detection next week, along with continuing to look for more ways to find objects. Edge detection is a computer vision technique that finds the boundaries between different things in an image. This can be useful for finding things like cars or buildings that have clear shapes or outlines. Edge detection algorithms work by looking at the pixel values in an image and searching for sudden changes; these sudden changes are often at the edges of objects. An example of basic edge detection is shown to the left. I predict that this method will struggle to detect objects when the color of clothing matches the surroundings.
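The "sudden change" idea can be sketched with Sobel kernels, a standard way to estimate horizontal and vertical change at each pixel (pure numpy, no OpenCV; a naive loop rather than an optimized convolution):

```python
import numpy as np

def sobel_edges(gray, threshold=100.0):
    """Mark pixels where the Sobel gradient magnitude (a weighted
    difference of neighboring pixel values) exceeds a threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3].astype(np.float64)
            gx[i, j] = (patch * kx).sum()   # horizontal change
            gy[i, j] = (patch * ky).sum()   # vertical change
    return np.hypot(gx, gy) > threshold
```

A vertical boundary between a dark and a bright region produces a large horizontal gradient, so the boundary pixels light up while flat regions stay quiet.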
WEEK 5 12/5 - 12/9
This week I made progress towards creating the edge detection algorithm and also found another method, apart from edge detection, that I would like to try. Feature extraction is a computer vision technique that involves identifying specific features or characteristics of an object in an image. These features can include characteristics such as color, shape, texture, or size. Feature extraction algorithms work by analyzing the pixel values in an image and extracting information about these characteristics. This information can then be used to classify or identify the object in the image. Feature extraction can be useful for detecting objects that may not have a clearly defined shape, such as animals or people, but it may not be as effective for detecting objects that do not have distinctive features. This method is very similar to edge detection, so it shouldn't be too difficult to create once edge detection is complete. I predict that this method will struggle in poor lighting conditions.
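One of the simplest features mentioned above is color. A minimal sketch of a color feature extractor (a normalized per-channel histogram; just one illustrative choice, not the method the log commits to):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Build a per-channel intensity histogram and normalize it, giving
    a fixed-length feature vector describing the image's color."""
    features = []
    for c in range(img.shape[2]):
        hist, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
        features.append(hist)
    v = np.concatenate(features).astype(np.float64)
    return v / v.sum()   # normalize so image size doesn't matter
```

Two images of the same object should produce similar vectors, which is what a classifier would then compare.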
WEEK 6 12/12 - 12/16
This week, the goal was to get started with an edge detection algorithm. The algorithm includes a part that detects major differences between pixel color values. This was accomplished by comparing the color values of adjacent pixels and determining the difference between them. With this part of the algorithm complete, the next step is to use the differences to detect edges in the image. Right now it doesn't take a video feed as input, but to the side is an example output for an image of peppers. The goal for next week is to extensively test the algorithm for efficiency and to see whether I can use this edge detection to label objects.
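The adjacent-pixel comparison described above can be sketched in a few lines of numpy (a sketch of the idea, not the actual implementation):

```python
import numpy as np

def adjacent_diffs(gray):
    """Absolute differences between each pixel and its right/bottom
    neighbor; large values mark candidate edge locations."""
    g = gray.astype(np.int32)          # avoid uint8 wraparound
    dx = np.abs(np.diff(g, axis=1))    # horizontal neighbor differences
    dy = np.abs(np.diff(g, axis=0))    # vertical neighbor differences
    return dx, dy
```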
WEEK 7 1/3 - 1/6
Disappointing week: upon testing the algorithm and analyzing its resource consumption, I found a memory leak. My initial thought is that the leak is in the pre-processing of the image before edge detection is applied. I'm starting to rewrite the Gaussian smoothing step of the pre-processing, as this should be the most memory-intensive part. https://github.com/AlexMontesCS/edge-detection-rs
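For reference, Gaussian smoothing is usually implemented separably, as two 1-D passes instead of one 2-D convolution, which keeps the working set small. A Python sketch of that formulation (the linked repo is Rust, so this is only an illustration of the technique, not its code):

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()   # normalize so overall brightness is preserved

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian smoothing: one horizontal pass then one
    vertical pass, instead of a full 2-D convolution."""
    k = gaussian_kernel_1d(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img.astype(np.float64), k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")
```

The separable version touches each pixel with a kernel of length 2r+1 twice, rather than a (2r+1)² window once, which cuts both work and intermediate memory.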
WEEK 8 1/9 - 1/13
Fine-tuning of the neural network model and the expansion of feature extraction capabilities further improved the algorithm's adaptability and precision. Through performance profiling and optimizations, I achieved notable speed improvements and overall performance gains. Additionally, an evaluation and comparison study with state-of-the-art methods provided valuable insights for further refinement. These developments strengthen the algorithm's overall capabilities.
WEEK 9 1/16 - 1/20
Significant progress was made in refining the algorithm and preparing it for practical implementation. I focused on enhancing the algorithm's robustness and generalization capabilities by conducting rigorous testing and fine-tuning. I incorporated advanced data augmentation techniques to improve the algorithm's performance on diverse datasets. Additionally, I explored the integration of post-processing methods such as non-maximum suppression to enhance the accuracy and clarity of the detected edges. Furthermore, I worked on optimizing the algorithm's runtime performance through algorithmic optimizations and parallelization techniques. These developments pave the way for the algorithm's practical implementation and bring me closer to achieving my research goals.
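For edges, non-maximum suppression thins thick edge responses by keeping a pixel only when it is a local maximum along its gradient direction. A simplified sketch (direction reduced to horizontal vs. vertical, as in a basic Canny-style pass; not necessarily how my implementation does it):

```python
import numpy as np

def nms_thin(mag, gx, gy):
    """Keep a pixel only if its gradient magnitude is at least as large
    as both neighbors along the dominant gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if abs(gx[i, j]) >= abs(gy[i, j]):   # edge runs vertically
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            else:                                 # edge runs horizontally
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out
```

A three-pixel-wide ridge of gradient magnitude collapses to its one-pixel crest, which is what makes the detected edges look clean.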
WEEK 10 1/23 - 1/27
Had midterms this week, no progress was made.
WEEK 11 1/23 - 1/27
This week the focus shifted towards testing the algorithm's performance in different brightness levels using an Arduino platform. I designed and conducted a series of experiments to evaluate the algorithm's reliability under varying lighting conditions. By interfacing the algorithm with the Arduino, I was able to automate the image capture process and analyze the algorithm's performance in real time. This testing allowed me to identify potential challenges and limitations, providing valuable insights for further optimization and fine-tuning. Through this effort, I gained a deeper understanding of the algorithm's behavior in different brightness scenarios and advanced my progress towards a more versatile and adaptable edge detection solution.
WEEK 12 1/30 - 2/3
This week I decided to use the Arduino I configured to create visual data. I used a brightness sensor to capture the room's brightness as a percentage, calculated by using the dimmest light setting as the 0 point and the brightest light setting as the 100 mark. With that data I was able to simultaneously collect data on the accuracy of the edge detection algorithm and create a graph.
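The percentage calculation is a straightforward linear mapping between the two calibration readings. A sketch (the 0–1023 range in the test is just a typical Arduino analog-pin range, an assumption rather than my actual sensor values):

```python
def brightness_percent(reading, dark_reading, bright_reading):
    """Map a raw sensor reading onto 0-100%, anchored at the dimmest
    and brightest light settings measured during calibration."""
    span = bright_reading - dark_reading
    if span == 0:
        raise ValueError("calibration readings must differ")
    pct = 100.0 * (reading - dark_reading) / span
    return min(100.0, max(0.0, pct))   # clamp readings outside calibration
```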
WEEK 13 2/6 - 2/10
I focused on addressing the insights gained from the data analysis and further refining the algorithm's performance. Based on the observations from the previous week's graph, I implemented targeted improvements to enhance the algorithm's accuracy and efficiency under different lighting conditions. These refinements involved fine-tuning the algorithm's parameters and incorporating adaptive techniques to dynamically adjust its behavior based on the prevailing brightness levels. I conducted extensive testing to validate the effectiveness of these refinements, comparing the results with the previous week's data. The outcomes were documented and analyzed to gauge the success of the optimizations. This iterative process of testing, refining, and analyzing allowed me to make significant progress in optimizing the algorithm's performance for different lighting scenarios, bringing me closer to achieving a reliable and versatile edge detection solution.
WEEK 14 2/13 - 2/16
My focus shifted towards improving the algorithm's computational efficiency and exploring deployment options. I conducted performance profiling to identify any remaining bottlenecks and optimized critical sections of the code to enhance runtime speed and memory utilization. Through algorithmic optimizations, parallelization, and data structure enhancements, I achieved significant performance gains without compromising accuracy. Furthermore, I investigated the feasibility of deploying the algorithm on edge devices with limited resources. I explored techniques such as model compression and quantization to reduce memory footprint and enable efficient execution on devices like Raspberry Pi and Arduino. Preliminary experiments were conducted to assess the algorithm's performance on these platforms, laying the groundwork for future optimizations and real-world implementations. The progress made this week brings me closer to a high-performing, resource-efficient edge detection solution that can be deployed in practical applications.
WEEK 15 2/27 -3/3
I focused on comparing the performance of my edge detection algorithm to that of a trained model. I selected a pre-trained deep learning model for edge detection and conducted an evaluation to assess its accuracy and computational efficiency. Using a diverse dataset with various image types, complexities, and lighting conditions, I compared the results of my algorithm with those of the trained model. The evaluation involved measuring metrics such as precision, recall, and F1-score to gauge the performance of both methods. Additionally, I analyzed the computational requirements, including inference time and memory usage, to assess the efficiency of each approach. The results were carefully documented and analyzed to determine the strengths and limitations of my algorithm in comparison to the trained model. This evaluation provides valuable insights for further refining the algorithm and understanding its competitiveness in edge detection tasks.
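For reference, the three metrics come straight from the true-positive, false-positive, and false-negative counts (standard definitions; the counts in the test are illustrative, not my measured results):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true/false positive and
    false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```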
WEEK 16 3/6 - 3/10
At this point I consider my research completed for the most part, and I've started to grab some visuals and organize my numerical data for display on my to-be-created poster in the coming weeks.
3/14 - 3/27
Poster completed and ready for presenting!