Test Results

Here you will find our test results. For every test conducted, the objective, setup, results, and conclusion are documented here, alongside pictures of the setup and graphs of the results.

Sensor Testing FT.1 (COMPLETE)


The voltage responses over time for the Figaro 2602 sensor

The voltage responses over time for the Figaro 2612 sensor

The voltage responses over time for the Figaro 2620 sensor

VOC Sensor Circuit Diagram




Objective: Acquire sensor response data from two different brands of hand sanitizer using a standardized testing procedure to train our algorithm. Determine whether the hand sanitizers contain different concentrations of acetone, ethanol, and methane detectable by the sensors, and whether residual VOCs affect further readings.


Setup: Figaro VOC sensors use a voltage divider circuit to translate chemical concentration into a measurable voltage: the sensing element acts as a variable resistor RS in series with a load resistor RL, so the output voltage Vout = VC × RL / (RS + RL) shifts as the sensor resistance responds to a chemical. For this test, three sensors were placed in a mason jar with hand sanitizer for one minute and output voltages were recorded. After one minute, the sensors were removed from the jar and voltages were recorded again. This process was repeated for 10 minutes for each of two brands of hand sanitizer. Voltages were measured using the Arduino Uno's analog pins at 10 second intervals. The sensors were allowed to rest for 20 minutes between tests, and testing occurred outside to allow airflow.
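A minimal Arduino-style sketch of this sampling loop; the pin order and the serial logging format are our assumptions, not the exact test code:

```c
// Read the three Figaro divider outputs every 10 seconds and log them
// over serial as CSV. Pin-to-sensor mapping is assumed.
const int SENSOR_PINS[3] = {A0, A1, A2};   // TGS2602, TGS2612, TGS2620 (assumed order)

void setup() {
  Serial.begin(9600);
}

void loop() {
  for (int i = 0; i < 3; i++) {
    int raw = analogRead(SENSOR_PINS[i]);   // 10-bit ADC: 0..1023
    float volts = raw * (5.0 / 1023.0);     // divider output in volts (5 V reference)
    Serial.print(volts, 3);
    Serial.print(i < 2 ? "," : "\n");
  }
  delay(10000);                             // 10 second sampling interval
}
```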

Once we obtained our STM32 MCUs, we remade our code within the ST IDE to utilize the three available Analog-to-Digital Converters (ADCs), and we refined our testing method to the following (a sketch of the resulting reading loop follows the list):

  • Warm up sensors for 5 minutes

  • Collect ambient air VOC data for 1 minute

  • Isolate sensor circuit with hand sanitizer and collect data for 5 minutes

  • Remove hand sanitizer and collect ambient air data for 1 minute

  • Repeat procedure with second type of hand sanitizer after 2 minutes

  • Repeat procedure with ambient air for a set of reference VOC data
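Under the refined method, each Figaro sensor is read through its own ADC. A minimal STM32 HAL sketch of that reading loop; the handle-to-sensor mapping and the 3.3 V reference are our assumptions:

```c
/* One ADC per Figaro sensor; handle-to-sensor mapping is assumed. */
#include "main.h"

extern ADC_HandleTypeDef hadc1, hadc2, hadc3;   /* TGS2602, 2612, 2620 (assumed) */

static float ReadSensorVolts(ADC_HandleTypeDef *hadc)
{
  HAL_ADC_Start(hadc);
  HAL_ADC_PollForConversion(hadc, HAL_MAX_DELAY);
  uint32_t raw = HAL_ADC_GetValue(hadc);        /* 12-bit result: 0..4095 */
  HAL_ADC_Stop(hadc);
  return raw * (3.3f / 4095.0f);                /* divider output in volts */
}

/* Sample all three sensors once per logging interval. */
void SampleSensors(float volts[3])
{
  volts[0] = ReadSensorVolts(&hadc1);
  volts[1] = ReadSensorVolts(&hadc2);
  volts[2] = ReadSensorVolts(&hadc3);
}
```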


Results: Isolating each sensor in a glass mason jar containing one of two brands of hand sanitizer, Germ-X or Equate, resulted in different voltage readings. Each brand produced a distinct, though similar, maximum output voltage that can be used to train our machine learning algorithm. For the Figaro 2602 sensor, Germ-X hand sanitizer produced an average maximum output voltage of 3.824 V, and Equate hand sanitizer produced an average maximum of 4.466 V. When the Figaro 2612 sensor was exposed to Germ-X, it read an average maximum of 2.72 V; with Equate, 2.8 V. The Figaro 2620 sensor had an average maximum output voltage of 3.901 V when exposed to Germ-X, and 4.29 V when exposed to Equate.


Conclusion: The hand sanitizers contain different concentrations of acetone, ethanol, and methane that are detectable by the sensors. We will be using this standardized testing method to acquire VOC data and will continue to improve the procedure as needed.


The setup used to collect data from our VOC sensors

Training A Preliminary Algorithm FT.2 (COMPLETE)


Figure illustrating our algorithm's accuracy


Figure showing data overlap


Objective: Train a preliminary machine learning algorithm so that it is able to differentiate between the two different classes that are presented to the algorithm.


Setup: Once the data files are loaded into the program, the data is labeled according to class. The sensor values are then plotted and the steady state is visually selected. All of this steady-state data is stacked into one CSV file and shuffled to ensure randomness. This data is then fed into the neural network, which outputs a class and the probability that it is correct.
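Conceptually, the network's final layer produces one raw score per class; a softmax converts those scores into probabilities, and the largest one is reported as the classification. The training itself ran in Google Colab; the following C sketch only illustrates that final step:

```c
#include <math.h>

/* Convert raw class scores to probabilities (softmax) and pick the winner.
   Illustrative sketch of the network's output stage, not our Colab code. */
int classify(const float scores[], float probs[], int n_classes)
{
  float sum = 0.0f;
  int best = 0;
  for (int i = 0; i < n_classes; i++) {
    probs[i] = expf(scores[i]);          /* exponentiate each score */
    sum += probs[i];
  }
  for (int i = 0; i < n_classes; i++) {
    probs[i] /= sum;                     /* normalize so probabilities sum to 1 */
    if (probs[i] > probs[best]) best = i;
  }
  return best;                           /* index of the most likely class */
}
```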

Results: Using provided sensor data, we were able to write a machine learning algorithm within Google Colab that displays sensor steady-state values, plots sensor data overlap, and successfully outputs the classification with 80% accuracy. Future implementations of the machine learning algorithm will use sensor data that we acquire from live tests.


Conclusion: The data is sufficient for training. To improve accuracy, the neural network can be made deeper and wider, adding more layers and making the layers larger. Having less overlap in the data would also improve results. The next step is to implement this on the microcontroller for validation.

Raw sensor data

First GUI FT.3 (COMPLETE)


An early version of the GUI created through the Arduino IDE


A snippet of the Python code showing the touch configuration


Objective: Connect the touchscreen to a programming interface and implement simple-to-use GUIs with limited functionality to see how usable they are when interacted with.


Setup: Using the Arduino IDE, a simple GUI was created in Python. An Arduino was connected to my laptop and wired to the LCD display via jumper wires; the display had pins soldered in for its 8-bit mode and was mounted on a breadboard.


Results: Using Python, we were able to easily interface with the touchscreen and create a simple preliminary layout, primarily because Adafruit provides libraries for the screen for use on Arduino. We were able to add functionality to the buttons and even configured them to light an external LED connected to the Arduino.


Conclusion: This test is still in progress, but we found that creating the GUI and adding functionality is just a matter of drawing different shapes at different coordinates of the touchscreen and making touches in those regions correspond to a function. The next step is to recreate this for the STM32. The issue we foresee is finding libraries compatible with the ST IDE.


A snippet of the Python code showing how the home page layout was constructed

Using Predetermined Data To Simulate ML VOC Classification FT.4 (COMPLETE)


A view of how data clean up is done from Google Colab

A view of the different layers of our algorithm

Objective: Train the machine learning algorithm so that it is able to differentiate between the three different classes that are presented to the algorithm.


Setup: As in FT.2, once the data files are loaded into the program, the data is labeled according to class. The sensor values are then plotted and the steady state is visually selected. All of this steady-state data is stacked into one CSV file and shuffled to ensure randomness. This data is then fed into the neural network, which outputs a class and the probability that it is correct.

Results: The algorithm successfully output the classification with 76% accuracy.


Conclusion: The data is sufficient for training, though only barely at the accuracy level we want. To improve accuracy, the neural network can be made deeper and wider, and more data can be included in the training set.




The Loss vs Accuracy graph of our current algorithm showing an accuracy level of 76%

Implementing A Fully Functional GUI FT.5 (COMPLETE)


Pinout of the MCU. The LCD uses SPI1's SCK, MISO, and MOSI pins plus CS, DC, and RST; the associated buttons use 4 GPIO EXTI pins

The configuration used for SPI1

A look at how the homepage looks on the LCD, note the buttons for the home and settings page on the sides, as well as the D-pad on the left

Objective: Create a simple but functional multi-page GUI capable of initiating and displaying the collection of sensor data, managing the external memory card, and allowing users to set up timers to help with their testing schedule. The GUI should occupy as little space as possible, to save memory and reduce what's being communicated to the LCD over SPI. It is navigated via a D-pad.


Setup: The LCD was set up and connected via 6 pins from the MCU. It communicates over SPI1 in Full-Duplex Master mode, with data transmitted at a rate of 40 Mbps. The D-pad buttons were implemented as GPIO inputs with external interrupts (GPIO_EXTI). Using the ST IDE and C, multiple pages were created with a library capable of drawing shapes and displaying text on the LCD. Functionality is achieved by programming functions to correspond with certain actions and displaying different visual effects on the screen when those actions occur.
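A minimal sketch of the D-pad handling via the HAL EXTI callback; the pin names (DPAD_UP_Pin, etc.) are assumed CubeMX-style labels, not necessarily our exact configuration:

```c
#include "main.h"

volatile int8_t cursor = 0;          /* index of the highlighted GUI button */
volatile uint8_t select_flag = 0;    /* set when the user presses select */

/* HAL calls this for every configured GPIO_EXTI line. */
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
  if (GPIO_Pin == DPAD_UP_Pin)        cursor--;        /* move highlight up */
  else if (GPIO_Pin == DPAD_DOWN_Pin) cursor++;        /* move highlight down */
  else if (GPIO_Pin == DPAD_SEL_Pin)  select_flag = 1; /* trigger the action */
  /* the main loop clamps the cursor, redraws the page,
     and plays the button animation */
}
```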

Results: The LCD is capable of displaying all the pages I created in the C program and is fully navigable using the D-pad. The user can see where they are via a button animation, and a separate animation plays when a selection is made. In total, the GUI occupies about 59.3 KB of memory, half of which is due to the required library.



Conclusion: The GUI is fully functional and successfully serves its development purposes; any further additions should be easy to implement.




A snippet of the C code used to create the layout and a couple of the button assignments for the main page

Implement A Timer System That Sounds A Buzzer When Complete FT.6 (COMPLETE)


A snippet of the C code illustrating how one timer was implemented


The buzzer that sounds when a timer finishes its countdown

Objective: Create timers accessible from the user interface capable of accurately counting down five different time periods. Have a buzzer sound when a timer finishes, and have the page reset itself afterward.


Setup: A buzzer is connected through one of the MCU's GPIO pins and programmed to switch on when the timer countdown completes, then reset itself. Using delays, the LCD is programmed to show a new value each second.
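A simplified sketch of one countdown; the buzzer pin names are assumed CubeMX-style labels, and LCD_DrawTime() stands in for the library call that redraws the remaining time:

```c
#include "main.h"

extern void LCD_DrawTime(uint32_t seconds);   /* placeholder for the redraw call */

void RunTimer(uint32_t seconds)
{
  while (seconds > 0) {
    LCD_DrawTime(seconds);                    /* show the remaining time */
    HAL_Delay(1000);                          /* wait one second */
    seconds--;
  }
  /* countdown finished: sound the buzzer, then reset it */
  HAL_GPIO_WritePin(BUZZER_GPIO_Port, BUZZER_Pin, GPIO_PIN_SET);
  HAL_Delay(2000);
  HAL_GPIO_WritePin(BUZZER_GPIO_Port, BUZZER_Pin, GPIO_PIN_RESET);
  /* the page then cleans itself back up to the timer menu */
}
```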

Results: The timers are accurate; comparing them with a timer on a smartphone revealed no noticeable time difference.


Conclusion: The timers work as they should, they are simple to use through the GUI, and the buzzer loudly alerts the user when a countdown has completed.


The timer page of the GUI with the 5 minute countdown initiated

3D Case Design and PCB Fabrication FT.7 (COMPLETE)


The prototype Fusion 360 model of the case


Side profile of the case showcasing the programming port and SD card slot


A closer look at the snap fit joint

The revision of the case

Top cover of the case showing the LCD, sensor ports, D-pad, home and settings button, and LED port to show device power status


Sensor interfacing circuit PCB designed in Eagle

Button board PCB designed in Eagle

Power bus PCB used to power and connect all peripherals to the microcontroller designed in Eagle

Objective: Design a housing to enclose all of the components in our project. The case needs to be compact, be structurally sound, and feature a way to easily access our VOC sensors. Design custom printed circuit boards (PCBs) to manage connections between peripherals and the microcontroller.



Setup: A preliminary case model was made in Fusion 360 and fabricated using Dr. Shrestha’s 3D printer. Once all necessary hardware components for the project were assembled, a revision of the preliminary design was created and fabricated using Dr. Shrestha’s 3D printer. Custom PCBs were designed in Eagle and fabricated using an online vendor.


Results: We have designed a case that houses all of the main components of our project, is easy to hold, and provides easy ways to replace the VOC sensors, charge the battery, and access the different ports. The case dimensions were within our requirements, and the snap-fit joints fit tightly together. We were also able to design and create custom PCBs to hold several subsystems together.



Conclusion: Our case is comparable in size to typical handheld devices, and enables us to easily access and replace various parts. We can easily update the design to accommodate new hardware components.



The 3D printed pieces of our first case design, and how they look assembled together

The second revision of our 3D printed case assembled together

The custom PCBs we designed: the power bus PCB (left), the button and D-pad PCB (middle), and the sensor interface PCB (right)

Interface The LCD With The MCU, Acquire And Store Sensor Data Through GUI Onto An External Memory Card ST.1 (COMPLETE)


The STM32 pin configuration with all peripherals


A snippet of the C program used to record sensor data and save it onto the external memory card


An image showing the SD card module and the boost converter to power it


A look at the sensor setup used only for the purpose of this test

Objective: Integrate multiple subsystems. Using the GUI, initiate the collection of VOC sensor data and have that data displayed on the UI as well as saved to an external memory card for future reference.


Setup: The LCD display was connected to the pins of the STM32 and configured within the IDE to communicate properly with the screen. The full GUI was written using ST's HAL version of C and implemented on the STM32. Data entered through the GUI is stored on the microcontroller. Communication with the LCD uses SPI1; the VOC sensors are connected to the MCU's ADC pins, and the SD card module is connected through SPI2 with an SD card inserted. Note that the sensor setup in this test is not the same as Anthony's standardized method; this setup was used only for the purpose of writing the programs to capture some form of sensor data. A boost converter was needed to supply the SD card module with 5 V, since the battery we're using would be unable to power it on its own. The user initializes the SD card through the GUI settings page, where a CSV file is created, and then proceeds to initialize the test. Readings are simultaneously saved into the CSV file and into a 2D array within the MCU.
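A sketch of how one reading might be appended to the CSV file, assuming the FatFs middleware that CubeMX generates for SD cards; the file name and the millivolt formatting are our assumptions:

```c
#include <stdio.h>
#include "fatfs.h"

/* Append one set of sensor readings to the CSV on the SD card. */
FRESULT LogReading(float v1, float v2, float v3)
{
  FIL file;
  char line[48];
  UINT written;

  FRESULT res = f_open(&file, "results.csv", FA_OPEN_APPEND | FA_WRITE);
  if (res != FR_OK) return res;              /* card missing or not mounted */

  /* Store millivolts as integers to avoid needing floating-point printf. */
  int len = snprintf(line, sizeof line, "%d,%d,%d\r\n",
                     (int)(v1 * 1000), (int)(v2 * 1000), (int)(v3 * 1000));
  res = f_write(&file, line, (UINT)len, &written);
  f_close(&file);
  return res;
}
```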


Results: The STM32 successfully works with the LCD through a library I found and configured to work with our MCU series. The touch features require a separate library, a working version of which I was unable to find despite numerous attempts, so we switched to a D-pad system. The SD card module works properly, and the VOC sensors feed voltage values into the ADCs to be displayed on the GUI. Overall, all subsystems work independently as well as together; VOC sensor data is collected and saved to the past-results page as well as to the SD card for later viewing on another device.


Conclusion: A core feature of the device works properly; future development efforts can be put towards improving the user experience. Multiple subsystems have proven able to coexist without any issues arising.



A look at how acquired sensor data is displayed through an early version of our GUI, and the previous 5 sensor readings being saved for viewing through the GUI

Some sensor data recorded through the GUI and saved onto the SD card, viewed in Excel

Implement ML Algorithm Trained Using Data Acquired From The VOC Sensors ST.2 (COMPLETE)


An example of a successful validation of one of our earlier iterations of our algorithm

A snippet of the C program function, MX_X_CUBE_AI_Process, used to process sensor data and output a classification onto the GUI

Objective: Upload the machine learning algorithm onto the microcontroller and integrate it with the other subsystems so that it is able to take real time sensor data as inputs and output a classification on the GUI.


Setup: We trained the ML algorithm using the sensor data we acquired until we reached our target accuracy level of 75%. Then, using ST's X-CUBE-AI software package, we imported and validated the algorithm. After the algorithm had been uploaded, we modified the IDE-generated function MX_X_CUBE_AI_Process to take the sensor data array as its input, run the algorithm, generate an output, and display the results on our GUI. The function is called at the end of each sensor data collection interval, so the classification is dynamic and updates each time sensor data is acquired.
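A heavily simplified sketch of that processing step, following the template X-CUBE-AI generates for a model named "network"; the exact macros and calls vary by tool version, and DisplayClassification() is a placeholder for our GUI routine:

```c
#include "network.h"        /* generated by X-CUBE-AI */
#include "network_data.h"

extern ai_handle network;                          /* created during AI init */
static ai_float in_data[AI_NETWORK_IN_1_SIZE];     /* flattened sensor window */
static ai_float out_data[AI_NETWORK_OUT_1_SIZE];   /* one probability per class */

extern void DisplayClassification(const ai_float *probs); /* GUI placeholder */

void RunClassification(const float *sensor_window)
{
  for (ai_size i = 0; i < AI_NETWORK_IN_1_SIZE; i++)
    in_data[i] = sensor_window[i];                 /* copy the latest readings */

  /* bind our buffers to the model's input and output descriptors */
  ai_buffer *in  = ai_network_inputs_get(network, NULL);
  ai_buffer *out = ai_network_outputs_get(network, NULL);
  in[0].data  = AI_HANDLE_PTR(in_data);
  out[0].data = AI_HANDLE_PTR(out_data);

  if (ai_network_run(network, in, out) == 1)       /* one batch processed */
    DisplayClassification(out_data);               /* update the GUI */
}
```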


Results: The algorithm has been successfully implemented. Results are displayed on the GUI and continuously update each time new sensor data is acquired. We were able to integrate a major component of our project with the other subsystems to form a cohesive system. When used in a non-controlled testing environment, such as without the mason jars, the output is still reliable. Running a test in open air showed that our algorithm was easily able to determine the classification. When a sample of Germ-X was introduced, the air classification percentage went down and the Germ-X classification began to rise, ending significantly higher than the other two classifications.


Conclusion: By going through the process of creating and implementing a machine learning algorithm, we have demonstrated that our device can be used for classifying VOCs. The ease with which new algorithms can be implemented, now that the framework has been created, shows that our device could one day be used to run more advanced algorithms.

When exposed to open air, the algorithm correctly identifies air as the classification, displayed on the GUI (left). When exposed to Germ-X, the algorithm correctly identified it as the most likely classification after a minute (middle). When exposed to Equate, the algorithm correctly identified it as the most likely classification, albeit with lower confidence (right).

Run The Device Over A 1 Hour Period To See Power Consumption ST.3 (COMPLETE)


Device setup with all peripherals attached, powered via the battery


USB Multimeter used to conduct the test


Multimeter used to conduct the test

Objective: Determine how much power the system, with all peripherals attached, consumes over a period of time, and from there determine how long the 1200 mAh battery will last.


Setup: A USB multimeter was connected to our device, which was powered via its USB port and run for a duration of 1 hour with the sensors attached. At the end of each testing period, the readings on the multimeter were noted. The device was then powered via its battery pack, and the current was measured using a standard multimeter.

Results: With all peripherals attached and powered via USB, the device draws 216 mA of current on average, consuming 216 mAh over the 1 hour test. With our 1200 mAh battery, that works out to 1200 mAh ÷ 216 mA ≈ 5.6 hours, or about 5 hours 30 minutes of runtime in theory. When powered via the battery pack, the multimeter showed a consistent draw of 215 to 217 mA, which gives the same estimate of about 5 hours 30 minutes.






Conclusion: The device consumes the same amount of power whether it's powered via battery or USB. It consistently drew about 216 mA of current over the 1 hour testing period, so we can be confident that, from a full charge, the battery could power the entire device for over 5 hours. This meets our engineering requirement of the device being able to run a minimum of 6 tests, especially if we assume that a single test consists of one iteration of our standardized testing procedure.


Total mAh reading and instantaneous current reading on the USB multimeter when powered via USB


Instantaneous current reading on multimeter when powered via the battery


Device Measurement And Weight ST.4 (COMPLETE)


The scale used for this test

Objective: Measure the dimensions and weight of the constructed device to verify it's of a size comparable to existing solutions and is indeed portable.


Setup: A ruler was used to measure the device's dimensions, and a scale (Fig. 62) was used to measure its weight. The device, complete with all components, was placed on the scale and weighed; the resulting weight will be advertised as the weight of the device. A similar process was used to determine the device's dimensions.


Results: The device's dimensions and weight are within our specified parameters and overall similar to those of a typical handheld device. The device weighed 373 g and measured 7 inches long, 5 inches wide, and 2 inches tall.


Conclusion: Our device is comparable in size and weight to a typical handheld device and within our engineering requirements.

System Testing and Verification ST.5 (COMPLETE)


An inside view of the assembled device

A side view of the assembled device

An overview of the assembled device with the lid closed

Objective: Ensure that all subsystems are working properly both independently and together. Assess any issues and make adjustments to fix said issues.


Setup: Set up the complete system, with the entire device within the 3D printed case and all subsystems connected, and test to see that it's working properly.


Results: The system worked properly, largely because each subsystem was designed with the foresight to work with the others. Each subsystem was tested to verify our device was functioning as it should, and each worked as expected.



Conclusion: Our verification test has shown that the system has been successfully designed and implemented; all subsystems work properly and interact with each other as needed. This test demonstrates the viability of our project as a solution to the problem we set out to solve.


The completed device powered on