Milestone 2
Plan
General Plan:
Understand the aspects of lidar that shall be used in our device.
Test different lidar sensors to see if they work properly.
Test motors.
Build a schematic for the device.
Build the belt.
Timeline (Weeks 1-12 in past tense, Weeks 13-15 in future tense):
Week 1 (9/3 - 9/9): Conceptual planning, reviewed last semester's progress.
Week 2 (9/10 - 9/16): Began looking for materials for prototypes. Lidar sensors were put on our radar as a possibility.
Week 3 (9/17 - 9/23): Began the first round of material/sensor selection; brainstormed new product/company names.
Week 4 (9/24 - 9/30): Redefined our sensor and CPU criteria. Began researching TOFSense lidar models, with the 'P' model being the standout.
Week 5 (10/1 - 10/7): Found the example code and wiring guides for the TOFSense sensors. Met with a lidar expert. Found similar project examples to study. Interviewed David Mo, a visually impaired student, to validate the project and learn more about what it could accomplish.
Week 6 (10/8 - 10/14): Acquired the TOFSense P and began testing code. Milestone 1 completed.
Week 7 (10/15 - 10/21): Acquired an Arduino UNO R4 and continued testing code while working through the sparse documentation provided. Ordered motors. Used an LED to test our ability to modulate signals.
Week 8 (10/22 - 10/28): Ordered a cable to configure the sensor from a PC. The cable arrived broken, and the first batch of motors was not what we ordered, so we had to reorder both.
Week 9 (10/29 - 11/4): Used the working cable to configure the sensor. The correct motors arrived. Learned about the TOFSense-M, which operates on a matrix as opposed to a single point like the TOFSense P, and ordered several.
Week 10 (11/5 - 11/11): Performed the first test of a motor with the sensor. Taped the breadboard and sensor to a wall and used a rolling chair as a preliminary test of the sensor's distance readings.
Week 11 (11/12 - 11/18): Tested sensor distance readings using a mobile rig we built and verified that the sensor data was accurate. Matrix-form sensors arrived.
Week 12 (11/19 - 11/25): Began testing the matrix-form sensors. The cable connecting the sensor to other hardware broke, so we had to order a cable kit. Worked on polishing Milestone 2.
Week 13 (11/26 - 12/2): Continue testing the matrix-form sensors, figuring out how to read data from multiple sensors more efficiently.
Week 14 (12/3 - 12/9): Work through problems that we shall almost certainly encounter in Week 13. Begin prototyping the belt and the physical aspects of the device. Document code and progress.
Week 15 (12/10 - 12/16): Present project progress to class.
Beyond: Possibly see if David Mo is able to do a voluntary minimal test. Plan a trip to St. Joseph's School for the Blind early in the spring.
Task Breakdown:
Software lead: creates original Arduino code and leads in modifying existing code
Aidan Rudd
Hardware lead: researches and compares sensors and the tools and architecture around them
Mathew Bairstow
Testing (Record Testing Data/Design): creates test plans and testing environments
Jeffrey Tharakan
Philip Mascaro
Jett Tinik
Meeting Minutes: manages write-ups of meeting events
Jett Tinik
Concepts
Sensors
The team aimed to streamline the selection of a sensor for our product, considering various options with distinct advantages and drawbacks. Our focus was on prioritizing factors like data range, low power consumption, and high sampling frequency. To aid in the sensor selection process, we utilized a Decision Matrix (see next section). Upon completing the matrix, we concluded that the lidar sensor best met our essential criteria.
Following the selection of our lidar sensor, we began planning the placement of each sensor on the belt and considered the field of view (FOV) they would offer (see Fig. 3.1). Initially, we assumed the sensor had a 63-degree range both horizontally and vertically. However, further examination revealed that the horizontal range was limited to 45 degrees (see Fig. 3.2). The accompanying Desmos plots illustrate how the sensors would perceive the surroundings from our belt. It is important to note that these representations do not factor in the dead zone and other considerations that shall be incorporated into the code.
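As a rough check on these plots (our own back-of-the-envelope geometry, using the same person-as-a-circle assumption and an illustrative body radius r): with eight sensors, the boresights sit 360°/8 = 45° apart, so a 45° FOV tiles the horizon exactly with no angular overlap, whereas the originally assumed 63° FOV would give each adjacent pair 63° - 45° = 18° of overlap. And because the sensors sit on the wearer rather than at a single point, the facing edges of adjacent 45° cones are parallel, leaving a blind strip between them whose width equals the chord between neighboring sensors:

$$d = 2r \sin(22.5^{\circ}) \approx 0.77\, r$$

For r of roughly 15 cm, that is a strip about 11 cm wide that never closes with range, one of the dead zones the code shall need to account for.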
Belt
Technical Belt Design
Following the initial testing of a single sensor, and using our field-of-view reference from Desmos, we began creating a hand-drawn design of our belt. This design serves as a guide for testing and crafting the belt. While the design remains adaptable, we consistently refer to this visualization during the testing phase.
Fig. 2. A hand-drawn design illustrating the arrangement of sensors on the belt and the inclusion of a front section on the fanny pack for housing the power source and CPU.
Fig. 3.1. Field of view of eight equally-spaced sensors where each sensor has an FOV of 63° (assumes a person is a circle)
Fig. 3.2. Field of view of eight equally-spaced sensors where each sensor has an FOV of 45° (assumes a person is a circle)
Concept Selection
Choosing the Sensor
Based on our decision matrix and initial testing, the group decided that the lidar sensor was the best way to proceed with our product development.
Designs
Process Flowchart
System Model
Analysis
Hardware Requirements:
The belt shall be made from two pieces of nylon stitched together, leaving a hollow channel through the belt where wires can be run.
The inside of this channel shall likely be lined with a slightly harder material, such as synthetic undyed leather.
The belt buckle shall be a side-release buckle (a shopping-cart seat buckle) to keep it as simple and tactile as we can.
Lidar sensors: currently eight TOFSense-M units, manufactured by Nooploop.
Motors: currently replacement phone vibration motors, since they are built to work in enclosed spaces and are still strong enough to be felt easily.
Arduino microcontroller: currently the UNO R4 WiFi. A pin expansion board, or a board with more pin headers, shall likely be explored in the future (see the sketch below).
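As a minimal sketch of how these parts could come together on the UNO R4 WiFi: the pin map and setMotor() helper below are our own illustration, not a finalized design. The UNO pinout exposes only six PWM-capable pins (~3, ~5, ~6, ~9, ~10, ~11), which is exactly why eight motors push us toward an expansion or driver board; we also assume each motor sits behind a transistor driver rather than hanging directly off a pin.

// Hypothetical pin map for eight vibration motors on the UNO R4 WiFi.
// Only six UNO pins (~3, ~5, ~6, ~9, ~10, ~11) support PWM, so two
// motors land on plain digital pins until we add an expansion/driver
// board. Each motor is assumed to be driven through a transistor,
// since the pins cannot source motor current directly.
const uint8_t MOTOR_PINS[8] = {3, 5, 6, 9, 10, 11, 2, 4};

void setup() {
  for (uint8_t i = 0; i < 8; i++) pinMode(MOTOR_PINS[i], OUTPUT);
}

// Drive motor i at a strength from 0 (off) to 255 (full). The two
// non-PWM pins can only switch fully on or off.
void setMotor(uint8_t i, uint8_t strength) {
  uint8_t pin = MOTOR_PINS[i];
  if (pin == 2 || pin == 4) {
    digitalWrite(pin, strength > 127 ? HIGH : LOW);
  } else {
    analogWrite(pin, strength);
  }
}

void loop() {}  // motor commands will come from the sensor-reading code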
Software Requirements:
Acquire a diverse set of data rapidly using our sensors.
Data shall be parsed quickly and accurately.
Code shall be efficient (to allow for quick data parsing, saving power, etc.).
Be able to use cascading to interpret a range of data and associate it with a specific sensor, to then alert the proper motor (see the sketch after this list).
Enhance the code by incorporating specific scenarios where data is interpreted differently, such as handling dead zones, detecting small objects, and situations where an object is detected for only a brief period.
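A minimal sketch of the parsing and routing requirements, reusing the hypothetical setMotor() helper from the hardware sketch above (this loop() would replace the empty one there). The frame layout assumed here (16 bytes, 0x57 header, sensor ID at byte 3, 24-bit little-endian distance at bytes 8-10, trailing sum checksum) and the millimeter scaling are our reading of Nooploop's protocol notes and must be verified against the datasheet before use:

// ASSUMED 16-byte frame (verify against the Nooploop datasheet):
//   byte 0     : 0x57 header
//   byte 3     : sensor ID (relevant once sensors are cascaded)
//   bytes 8-10 : distance, 24-bit little-endian, assumed millimeters
//   byte 15    : checksum = low byte of the sum of bytes 0-14
// setup() must also call Serial1.begin(921600); we believe that is
// the TOFSense default UART baud, but verify against the manual.
const size_t FRAME_LEN = 16;
const uint32_t ALERT_MM = 2000;  // vibrate within 2 m (our own choice)

bool parseFrame(const uint8_t *f, uint8_t &id, uint32_t &mm) {
  uint8_t sum = 0;
  for (size_t i = 0; i < FRAME_LEN - 1; i++) sum += f[i];
  if (f[0] != 0x57 || sum != f[FRAME_LEN - 1]) return false;
  id = f[3];
  mm = (uint32_t)f[8] | ((uint32_t)f[9] << 8) | ((uint32_t)f[10] << 16);
  return true;
}

void loop() {
  static uint8_t buf[FRAME_LEN];
  static size_t n = 0;
  while (Serial1.available()) {
    buf[n++] = (uint8_t)Serial1.read();
    if (buf[0] != 0x57) { n = 0; continue; }  // resync on header byte
    if (n < FRAME_LEN) continue;
    uint8_t id; uint32_t mm;
    if (parseFrame(buf, id, mm)) {
      // Closer object -> stronger vibration on that sensor's motor.
      uint32_t d = mm < ALERT_MM ? mm : ALERT_MM;
      setMotor(id % 8, (uint8_t)(255UL * (ALERT_MM - d) / ALERT_MM));
    }
    n = 0;  // drop this frame, valid or not, and wait for the next header
  }
}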
Software Resources:
The Arduino programming language used to set up our board.
Code and wiring instructions for setting up our initial design, which we iterate upon as the project progresses.
The program provided by the sensor manufacturer (Nooploop) for configuring the sensor and previewing its data, used to confirm that our code is accurate.
Our GitHub page documenting our progress.
TABLE I
SPECS FOR TOFSENSE-M
Test Plan
Our first round of testing was planned and accomplished during the month of November. The goal of the first test was to get our single-point sensor, the TOFSense P, to stop giving relative numbers for its distance values and start giving real-world values.
The plan was relatively simple: mount the sensor and electronics to a stable, movable platform at a 90-degree angle relative to the ground, so that the sensor looks outward (see Fig. 4). The platform is moved toward a wall across a measured distance. At a few selected points, we would stop the platform and compare the tape measure's distance from the wall with what the sensor measured. Once this was done, we could find how far off the sensor's measurements were on average and derive a factor to multiply against the sensor's raw value. The result was our single-point sensor displaying distances that are much more accurate, often only a few centimeters off from the tape measure's value. We also added a function that lets a motor plugged into the controller vibrate harder the closer the platform got to the wall, and it did so effortlessly.
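A minimal sketch of that correction step, with placeholder numbers rather than our recorded test data:

// Derive one scale factor from paired (tape measure, sensor) readings,
// then apply it to live readings. The sample values below are
// placeholders, not our recorded data.
const int N_SAMPLES = 4;
float tapeCm[N_SAMPLES]   = {50.0, 100.0, 150.0, 200.0};
float sensorCm[N_SAMPLES] = {41.0,  83.0, 123.0, 166.0};

float calibrationFactor() {
  float sum = 0;
  for (int i = 0; i < N_SAMPLES; i++) sum += tapeCm[i] / sensorCm[i];
  return sum / N_SAMPLES;  // average ratio of true to measured distance
}

// At runtime: correctedCm = rawCm * calibrationFactor();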
Now that we have found success with the solo single-point sensor and motor, we are changing to a new sensor type and using more of them. During testing, we found that our sensor was not as easily controllable as we wanted, and we partly attributed this to the single-point sensor being designed for low-light environments with a fairly narrow FOV. We found a new sensor model from the same manufacturer, the TOFSense-M, that is designed for more general lighting conditions and has a wider FOV; the only major difference is its scanning type. Instead of being single-point, the new sensor operates on a matrix grid.
Going forward, we shall first figure out how to read, print, and parse the raw hex data the sensor gives us, so that we can be sure all values are accurate and can be pulled out and used however we need. We shall also test multiple sensors through the manufacturer's method of wiring them together, called "cascading," and learn more about its functionality going forward. Unfortunately, almost all documentation on cascading is unobtainable, so it will take a lot of blind guessing and puzzling to work out how to use it. Once we get understandable data from the sensors, we shall make sure the data is scaled to real-world numbers, likely through tests similar to those for the single-point sensor.
Fig. 4. A trolley with angle brackets holding a piece of cardboard with an insert for the TOFSense P sensor, connected to our Arduino UNO R4 WiFi board, which is connected to a motor and a laptop.
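Since the TOFSense-M returns a grid of distances rather than a single value, the first processing step we anticipate is collapsing each frame into something one motor can express. A minimal sketch, assuming an 8x8 matrix of millimeter distances with invalid cells reported as 0; the real grid size and invalid-value convention must be taken from the datasheet:

// Collapse a TOFSense-M distance matrix to its nearest valid reading,
// which would then drive that sensor's motor. The 8x8 size and the
// 0 = invalid convention are assumptions to verify against the manual.
const int ROWS = 8, COLS = 8;
const uint32_t NO_READING = 0xFFFFFFFFUL;

uint32_t nearestMm(const uint32_t m[ROWS][COLS]) {
  uint32_t best = NO_READING;
  for (int r = 0; r < ROWS; r++)
    for (int c = 0; c < COLS; c++)
      if (m[r][c] != 0 && m[r][c] < best) best = m[r][c];
  return best;  // NO_READING when no cell held a valid distance
}

// e.g. setMotor(sensorIndex, strengthFromDistance(nearestMm(frame)));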