Milestone 3: Implementation, Test, Teamwork

3.1: Implementation

Hardware/Linux

Fig. 1. Early camera testing.

The team obtained a PIR motion sensor that detects movement. When motion is detected, the Raspberry Pi Camera Module is activated and a picture is taken.

Fig. 2. Python code for motion detection.
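The motion-trigger logic of Fig. 2 can be sketched roughly as follows. This is a minimal illustration, not the team's actual script: the `should_capture` cooldown helper, the 5-second cooldown value, and the `capture.jpg` filename are assumptions, and the hardware loop (shown in comments) assumes the gpiozero and picamera2 libraries.

```python
import time

# Hypothetical helper: decide whether a PIR event should trigger a capture,
# enforcing a cooldown so one motion event does not produce dozens of photos.
def should_capture(last_capture, now, cooldown_s=5.0):
    return last_capture is None or (now - last_capture) >= cooldown_s

# On the Pi itself, the loop looks roughly like this (assumes the gpiozero
# and picamera2 libraries; GPIO 4 matches the wiring described in Fig. 3):
#
#   from gpiozero import MotionSensor
#   from picamera2 import Picamera2
#   pir = MotionSensor(4)               # PIR Out wired to GPIO 4
#   cam = Picamera2()
#   cam.start()
#   last = None
#   while True:
#       pir.wait_for_motion()           # block until the PIR reports movement
#       if should_capture(last, time.monotonic()):
#           cam.capture_file("capture.jpg")
#           last = time.monotonic()
```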

Fig. 3. The Raspberry Pi and PIR sensor, with connected pins highlighted in yellow.

The PIR sensor was connected to the Raspberry Pi's 5V pin, a ground (GND) pin, and GPIO 4 for its Out signal.

Fig. 4. The Raspberry Pi connected to the camera.

The Raspberry Pi camera was connected via the camera port at the center of the board.

Fig. 5. Closeup view of the PIR sensor and its potentiometers.

The PIR sensor has two potentiometers. The left potentiometer controls the sensor's sensitivity, and the right one controls the output timeout (2.5 to 250 seconds). Turning them counter-clockwise increases sensitivity and decreases the output time.

Fig. 6. Measurements for the new PIR sensor.

Update: The team switched to a significantly smaller PIR sensor, as it performed better and will make the final product more compact.

Cloud/Image Processing

Fig. 7. AWS S3 bucket.

The team set up an Amazon Web Services (AWS) organization and created an S3 bucket that acts as a storage hub for all of the files the system uses. The script uploads each Raspberry Pi Camera image to the AutoBox S3 bucket. The team then runs commands in the AWS Command Line Interface (CLI), which makes an API call to Amazon Rekognition, AWS's image-analysis service. Rekognition returns JSON describing both any text it detects in the image and labels for everything it recognizes in the scene, the latter of which should include a package.

Fig. 8. Python code for AWS Rekognition.
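The upload-and-detect flow (Fig. 8) can be sketched as below. This is a hedged illustration rather than the team's actual script: the helper functions, the 80% confidence cutoff, and the "autobox" / "capture.jpg" names are assumptions, though the response fields (`Labels`, `TextDetections`, `Confidence`) follow the documented shapes of Rekognition's `detect_labels` and `detect_text` APIs.

```python
# Hypothetical helpers: pull high-confidence results out of the JSON that
# Rekognition returns from detect_labels / detect_text.
def confident_labels(response, min_confidence=80.0):
    return [lbl["Name"] for lbl in response.get("Labels", [])
            if lbl["Confidence"] >= min_confidence]

def detected_lines(response, min_confidence=80.0):
    return [d["DetectedText"] for d in response.get("TextDetections", [])
            if d["Type"] == "LINE" and d["Confidence"] >= min_confidence]

# The upload-and-detect flow itself (sketch, assuming boto3 with credentials
# configured; "autobox" and "capture.jpg" are placeholder names):
#
#   import boto3
#   s3 = boto3.client("s3")
#   s3.upload_file("capture.jpg", "autobox", "capture.jpg")
#   rek = boto3.client("rekognition")
#   img = {"S3Object": {"Bucket": "autobox", "Name": "capture.jpg"}}
#   labels = confident_labels(rek.detect_labels(Image=img))
#   lines = detected_lines(rek.detect_text(Image=img))
```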

Enclosure

The team purchased a plastic enclosure for the Raspberry Pi. This included the primary enclosure for the Raspberry Pi, a segment for attaching the camera, and an adjustable mount. A WS2812 LED RGB ring light is attached to the top of the enclosure, next to the camera.

Fig. 9. Overhead view of the AutoBox enclosure. Note the camera and LED are not attached to the enclosure.
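Driving the WS2812 ring light might look like the sketch below. The LED count (12) and data pin (GPIO 18) are placeholder assumptions, not confirmed hardware details; the hardware calls (in comments) assume the rpi_ws281x library, while `pack_color` mirrors the 24-bit RGB packing that library's `Color()` performs.

```python
# Hypothetical helper: pack an (R, G, B) triple into the 24-bit integer the
# WS2812 driver expects (the same packing rpi_ws281x's Color() performs).
def pack_color(r, g, b):
    return (r << 16) | (g << 8) | b

# Driving the ring (sketch, assuming the rpi_ws281x library; 12 LEDs on
# GPIO 18 are placeholder values):
#
#   from rpi_ws281x import PixelStrip
#   strip = PixelStrip(12, 18)          # LED count, data pin
#   strip.begin()
#   for i in range(12):
#       strip.setPixelColor(i, pack_color(255, 255, 255))  # white fill light
#   strip.show()
```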

3.2: Test

Fig. 10. Early software testing.

The team tested with an NJ Transit ticket to see whether both the item and its text could be detected. After much trial and error, the team was able to obtain JSON data for both label and text detection. The image cannot be blurry, or text detection will not return valid results; on blurry text, the parser sometimes reported only ~40% confidence. Label detection also improves when the angle at which the image is taken is varied, so the team will test with a makeshift box acting as a chamber to decide where to place the camera.

3.2.1: Results

Fig. 11. AutoBox output sent via SMS.

The results of the image processing and text detection are sent to the end user. Further refinement is needed in the positioning of the camera and the item.

Why send texts rather than photos? Accessibility: the detected text is much easier to read in a message that can be zoomed than in a photo.
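Composing and sending that SMS might look like the sketch below. The `format_sms` helper and its wording are illustrative assumptions, and the report does not name a delivery mechanism, so the use of AWS SNS (shown in comments, with a placeholder phone number) is also an assumption.

```python
# Hypothetical formatter: build the SMS body from the Rekognition results.
def format_sms(labels, text_lines):
    parts = ["AutoBox: new mail detected."]
    if labels:
        parts.append("Detected: " + ", ".join(labels))
    if text_lines:
        parts.append("Text: " + " / ".join(text_lines))
    return "\n".join(parts)

# Sending the message (sketch, assuming AWS SNS via boto3; the phone number
# is a placeholder):
#
#   import boto3
#   sns = boto3.client("sns")
#   sns.publish(PhoneNumber="+15555550123",
#               Message=format_sms(["Box"], ["JOHN DOE", "123 MAIN ST"]))
```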

- Delivery — A label will be placed inside the PO box to inform delivery people, instructing them how to position the mail so that the camera can see it.

- Housing — The Innovation Expo demo will place the device inside an actual PO box. The device will be connected to a TV screen via HDMI, and one teammate's phone will be used to show visitors that the notifications work.

3.3: Teamwork

The team continues to grow stronger at collaboration and division of responsibility. For the prototype, the members focused on hardware took the lead in ensuring that the motion sensor could properly communicate with the Raspberry Pi. Although there were initial technical difficulties, multiple team members pulled together to establish the base functionality of the motion sensor and the Raspberry Pi. To improve future collaboration, the team will prioritize time management so that issues such as technical difficulties are addressed earlier. Making time management a priority will not only free more time and resources for bug fixing and troubleshooting, but also leave more time to expand the functionality of the device in all future prototypes.