Communication Device by the Yarrows: Final Documentation
Contents
Introduction
Our communication device for Steve was built as our final project for the class 60-223 Introduction to Physical Computing. For this final project, the physical computing class met with clients with traumatic brain injuries, and each team was to make an assistive device to help with their everyday needs. Our team, the Yarrows, met with Steve, whose injury causes him to have trouble with mobility and speech. He lost the use of his left hand and arm, which used to be his dominant side, and he has trouble physically speaking the sentences he has in his mind. He knows the words he wants to say; his mouth and vocal cords just refuse to cooperate. With this in mind, we set out to make something that could help Steve talk a bit faster and that could be easily used with just one hand.
What We Built
Product Description
The device has a screen that displays text in page format. The first page is a list of first halves of sentences, which Steve can select from. Upon selecting a first half, a corresponding list of second halves shows up. After selecting the second half, the device speaks the full sentence out loud for him via a built-in speaker. The device has a custom controller system modeled after retro video game controllers, with a joystick to scroll through the sentence options, a select button, a back button, a power switch, and a volume knob for the speaker. The device is powered by a battery housed in a compartment with a removable door so the battery can be changed as needed.
Featured Image
The final product of our communication device for Steve.
Demo Video
Working demonstration video. Note that the screen appears to flash in the video rather than showing a stable image; this is just an artifact of recording a screen with a camera. The screen actually works just fine, as can be seen in the images in the sections below.
Detail Images
Front View - Shows the device the way Steve sees it when he uses it.
Top Down View - Shows the two buttons, select and back, along with the power switch, the volume knob, and the joystick. Also shows "Pitt is the shit" engraved below the screen, which is Steve's favorite phrase.
Side View - Shows battery compartment with removable panel for access to change the battery when needed.
Back View - Shows the speaker which would be facing toward whoever Steve is communicating with.
View with the Wheelchair Mount - Shows the mount we made to attach the device to Steve's wheelchair so he can easily use it with just his one hand whenever he needs it.
Narrative Description
Steve is talking to Lamar and finds that he suddenly needs to use the restroom. His vocal cords seize up on him, and he can't manage to get the words out fast enough, so he turns to the communication device. He navigates the screen using the joystick to scroll through the options. First he picks "I need", then "to use the bathroom". He hits select on the second page of options, and his voice comes out of the speaker saying the line. Lamar hears this, nods in understanding, then helps Steve navigate to the nearest bathroom.
How We Got Here
Prototypes
For these prototypes, we were asking the following questions:
Which of the two controller designs is more intuitive to use?
Is the screen positioned well, and is its display easy to read?
Is the first half/second half sentence choosing process easy and convenient?
We created two physical prototypes, each with a very different design for the controller. Prototype 1 has a controller where the pointer, middle, and ring fingers each rest on their own button for select, scroll up, and scroll down. Prototype 2 has a controller that is just a joystick for scrolling, with a select button on top. Both prototypes use the same screen configuration; we used a series of paper cutouts of the different selection lists to "Wizard of Oz" the experience while Steve tested the controllers. Additionally, we set up the screen to show an example of the first selection list so Steve could see the display, and we made an example wheelchair mount to connect the device to the wheelchair itself.
Prototype 1: Features a design that is easy for a hand to rest on and requires minimal movement to press the buttons that the fingertips would already be laying on. Three buttons for scroll up, scroll down, and select.
Prototype 2: Features a joystick to scroll through the list with and a select button that is on top of the joystick.
A demonstration video of how the hand rests on the controller for Prototype 1 and the ease of having each finger press its own button.
A demonstration video of how the joystick is used to scroll up and down on the selection list for Prototype 2 and the button on top is used to select.
Screen Display Demo: We programmed the screen to just show the first list with one option highlighted to see if the text size and contrast made reading clear for Steve.
Wheelchair Mount Prototype: A metal bar with diagonal support in which the vertical post would be fixed to a metal bar on the wheelchair and the horizontal post would be fixed to the device.
Two images of Steve trying out Prototype 2 while we hold the device up to the wheelchair and test different mounting locations.
Steve is trying out different joystick and button combinations to see which one he likes the best.
During the early stages of our prototyping process, we gained many insights that caused us to alter our initial prototypes. These insights came from a combination of feedback from our client Steve and issues that arose while testing our prototypes.
The feedback we received from Steve led to several changes. We traded the hand-shaped button configuration of our first prototype for a joystick, since it closely resembles the joystick Steve uses to control his electric wheelchair. We used a larger screen that could display larger text, as Steve has difficulty seeing small objects on screens. We designed a detachable wheelchair mount so Steve would not have to carry the device around and could transfer it between wheelchairs. Finally, we built the whole device out of acrylic because Steve liked its texture.
Some changes that came about during prototype testing included placing the joystick and buttons close together on top of the device so they are easy to access and don't obstruct the wheelchair, adjusting the angle of the screen to reduce glare, and adding a back button so Steve can undo a selection while building a phrase.
Process
We started by doing the wiring for the OLED screen based on the setup shown on the Adafruit website.
We got the OLED screen to show the first list of sentence first halves as desired. The black bar across the screen is just an artifact of photographing it through a camera lens; it was working just fine.
We configured a joystick with the screen and added a feature for scrolling through the list.
Separate from the screen and joystick circuit, we made a circuit for the DFPlayerMini and the speaker using some random sample audio files on one of our computers to test it.
The final CAD design for the box that would house the electronics of the device.
Both circuits side by side after adding a select button to the screen/joystick circuit and a potentiometer to the DFPlayerMini/speaker circuit for volume control.
The final circuit condensed so it only uses one Arduino and minimal protoboards.
The final circuit after soldering. Some components have also been mounted into the top panel such as the buttons and power switch.
We mounted all the parts that needed to be mounted and glued the sides of the box that could be at this time. We purposely wanted to leave the circuitry accessible while we debugged the system because our selection process was not quite working as planned.
We finally got the code to work for the screen and were able to finish putting together the box just in time. Unfortunately, we were not able to implement the code for the DFPlayer Mini, speaker, and potentiometer volume knob.
Process Retrospective
The moment when we finally managed to get the screen to scroll downwards correctly was a big one for us. The OLED screen was consistently more difficult to interact with than expected, and there was a lot of boilerplate code we had to write just to get it to work. Getting the OLED to behave how we wanted was a major source of frustration in the early days, and it didn't really start doing everything we wanted until the very end. We also should have worked on the speaker/audio portion of the project much earlier. We were told repeatedly that sound is notoriously finicky to work with, but after we managed to get a demo with basic function working, we thought it wouldn't be too hard to integrate into the final code. That was a mistake. Things broke immediately upon integration with no clear reason why, and because we had left it until near the end, we had no time to troubleshoot and fix it. We also forgot to make an access panel for the mp3 player so that the user could swap out the sound card. In general, we should have allocated more time for our various tasks; everything took much longer than we thought it would. We didn't look at the Gantt chart at all after we made it, so we were probably off schedule after the first prototype critique. We were still working on the device, trying to make things work or work better, up until the morning of the final critique.
Conclusions and Lessons Learned
Addressing Feedback
Feedback 1: "I would recommend have the corners rounded so that the device won’t hurt the user when the wheelchair bumps or moves around."
Our Response: When we were still fabricating the enclosure for all the electronics, we noticed how sharp the edges and corners were and we did want to address it. Our initial plan was to sand down these sharp edges into a smooth surface, but we ran out of time and we didn't manage to get to it.
Feedback 2: "The audio function doesn’t yet work. Should have a text readable backup."
Our Response: This was also part of our initial design when we were still sketching things out. We had two screens: one tilted toward Steve so he could read and select the options he wanted to use, and one tilted away from him so that the person he's talking to could read what he's outputting. This second screen wouldn't show the options Steve is scrolling through, only what he chooses, to minimize cognitive load on the listener. Originally we framed this as a choice between the additional screen and the speaker, and we chose the speaker, but the most robust solution would have been to implement both.
Feedback 3: "Ideally it would be smaller."
Our Response: Yes, the final product ended up a lot bigger than we wanted it to be. It's pretty clunky as is, and if we had more time or decided to come back to this project, we would definitely look to scale it down and make it as small and convenient to use as possible. Additionally, a mistake I (Carmyn) personally take responsibility for is how unnecessarily long the wires are inside the device. For my personal project before this one, I made the wires too short and didn't want that issue to show up here too, but a happy medium would have been preferred.
Feedback 4: "Being able to add or modify language would be ideal."
Our Response: Yes, we really like this idea of customizing the device to suit the user's own needs. With the Uno, we don't really have the option of an extensive vocabulary database covering most, if not all, of the daily sentences someone might use to communicate; in its current form, we are just about at maximum memory capacity. If we used a platform with more memory, this is definitely something we would consider implementing. It would also have been good to include another removable panel in the design for access to the SD card for the DFPlayer Mini.
Reflection
Working with a person with a disability, especially of Steve's particular kind, was very eye-opening. Before going in, we knew we were likely to have some difficulties communicating with Steve, but we only grasped the full extent of it once we actually talked with him. He was really struggling to get his words out near the end of our conversation and was growing visibly frustrated with himself. Seeing that feeling of utter helplessness, where he knew what he wanted to say but just could not say it despite his best efforts, was terrible, and it was a lot of the reason why we settled on an assistive device to help him communicate with people, hoping to relieve some of that frustration.
Concluding Thoughts
We should have spent more time on programming, especially earlier on. We were able to get all of the individual features working fairly early: the display itself, the scrolling feature, the speaker, and the volume knob. This led us to (falsely) believe that implementing them all together later on would be straightforward. However, this ultimately ended up being the most difficult part of the whole project. Getting the corresponding lists to show up properly when selecting different items in the first list was incredibly challenging and led to much sleep deprivation; this part of the code only really started working the morning of the final presentation. We were never able to integrate the speaker, with its prerecorded audio files speaking the sentences, into the final code in time. We really wish we had spent more time on this earlier on.
The Yarrows - Carmyn, Nnenna, and Allen - with their client Steve, who is using the communication device.
Technical Details
Schematic
Block Diagram
Code
Final Code
/*
Final Project - Communication Device for Steve
Allen Zhu, Carmyn Talento, and Nnenna Nwaigwe
The final project was a communication device for Steve that would allow him to speak more quickly and more easily than he currently can, and also be easy for him to use. The form it took was a screen with a few sentence-starter options; selecting one takes him to another page where he can complete the sentence fragment. Selection is handled by a joystick for scrolling and two buttons, one to select and one to go back.
Sources:
Inspired by https://www.dfrobot.com/blog-
Created with the help of ChatGPT and the Adafruit_SSD1325 library demo.
Pin Mapping:
Arduino pin | role | details
-----------------------------
2 input select button
4 input back button
8 output OLED DC
9 output OLED reset
10 output OLED CS
11 output OLED MOSI
13 output OLED CLK
A5 input joystick vertical axis (DIR1)
*/
//importing libraries
#include <SPI.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1325.h>
// Pin definitions
const int SELECTBUTTONPIN = 2;
const int BACKBUTTONPIN = 4;
const int DIR1 = A5;
const int MARGIN = 150;
const int POSITIONTIMER = 300;
// Initialize OLED display
#define OLED_CLK 13
#define OLED_MOSI 11
#define OLED_CS 10
#define OLED_RESET 9
#define OLED_DC 8
Adafruit_SSD1325 screen(OLED_MOSI, OLED_CLK, OLED_DC, OLED_RESET, OLED_CS);
// Structure for menu items
struct Menu {
const char* items[6];
int itemCount;
};
// Define selection menus
Menu menus[] = {
{ { "I need-", "I want to-", "Can you-", "What-", "Why-", "I feel-" }, 6 },
{ { "a break.", "food.", "water.", "the bathrm." }, 4 },
{ { "watch ftbll.", "play games." }, 2 },
{ { "watch TV?", "help me?", "talk to me?" }, 3 },
{ { "is that?", "can I do?" }, 2 },
{ { "am I here?", "Why?" }, 2 },
{ { "happy.", "sad.", "angry.", "afraid." }, 4 }
};
//declaring menu manipulation variables
int currPage = 0;
int index = 0;
int shift = 0;
unsigned long position = 0;
void setup() {
Serial.begin(9600);
// Initialize screen
screen.begin();
screen.clearDisplay();
screen.setTextSize(2);
screen.setTextColor(WHITE, BLACK);
updateDisplay();
// Initialize buttons
pinMode(SELECTBUTTONPIN, INPUT_PULLUP);
pinMode(BACKBUTTONPIN, INPUT_PULLUP);
}
void loop() {
// Reads button and joystick input
int dir1 = analogRead(DIR1);
int selectButtonPressed = !digitalRead(SELECTBUTTONPIN);
int backButtonPressed = !digitalRead(BACKBUTTONPIN);
// Handles select button press
if (selectButtonPressed && currPage == 0) {
currPage = index + 1;
index = 0;
shift = 0;
updateDisplay();
}
// Handles back button press
else if (backButtonPressed && currPage != 0) {
currPage = 0;
index = 0;
shift = 0;
updateDisplay();
}
// Handles joystick movement
if (millis() - position >= POSITIONTIMER) {
if (dir1 > 500 + MARGIN && index > 0) {
index--;
position = millis();
if (index < shift) {
shift--;
}
updateDisplay();
} else if (dir1 < 500 - MARGIN && index < menus[currPage].itemCount - 1) {
index++;
position = millis();
if (index >= shift + 4) {
shift++;
}
updateDisplay();
}
}
}
// Handles menu pages
void updateDisplay() {
screen.clearDisplay();
int displayCount = min(4, menus[currPage].itemCount);
// Iterates through items on current page
for (int i = 0; i < displayCount; i++) {
int currentIndex = i + shift;
if (currentIndex >= 0 && currentIndex < menus[currPage].itemCount) {
if (i == index - shift) {
screen.fillRect(0, i * 16, screen.width(), 16, WHITE);
screen.setCursor(0, i * 16);
screen.setTextColor(BLACK);
screen.println(menus[currPage].items[currentIndex]);
screen.setTextColor(WHITE);
} else {
screen.setCursor(0, i * 16);
screen.println(menus[currPage].items[currentIndex]);
}
} else {
screen.setCursor(0, i * 16);
screen.println();
}
}
screen.display();
}
DFPlayerMini & Speaker Code
This portion of the code never made it into the final code, but we felt it was important to include because it shows the working features of the DFPlayer Mini (the mp3 player holding the audio recordings of the sentences) together with the speaker and the potentiometer volume knob.
/*
Final Project - DFPlayerMini working example code
Allen Zhu, Carmyn Talento, and Nnenna Nwaigwe
This part of the final project code is for the audio portion of the device that we didn't manage to get working.
Sources:
1462.html by DFRobot Feb 26 2020
Additions made by Just Baselmans https://www.youtube.com/justbaselmansYT Jan 23 2023
*/
#include "SoftwareSerial.h"
#include "DFRobotDFPlayerMini.h"
// Initialize software serial on pins 5 and 6
SoftwareSerial mySoftwareSerial(5, 6); // RX, TX
DFRobotDFPlayerMini myDFPlayer;
String line;
int potVal = 0; // potentiometer value
int volume = 0; // volume value
const int VOLUMEDELAY = 50;  // ms between volume updates
void setup() {
//set up pins for potentiometer
pinMode(A0, INPUT);
pinMode(3, OUTPUT);
// Serial communication with the module
mySoftwareSerial.begin(9600);
// Initialize Arduino serial
Serial.begin(115200);
// Check if the module is responding and if the SD card is found
Serial.println();
Serial.println(F("DFRobot DFPlayer Mini"));
Serial.println(F("Initializing DFPlayer module ... Wait!"));
if (!myDFPlayer.begin(mySoftwareSerial)) {
Serial.println(F("Not initialized:"));
Serial.println(F("1. Check the DFPlayer Mini connections"));
Serial.println(F("2. Insert an SD card"));
while (true);  // halt here; initialization failed
}
Serial.println();
Serial.println(F("DFPlayer Mini module initialized!"));
// Initial settings
myDFPlayer.setTimeOut(500); // Serial timeout 500ms
myDFPlayer.volume(5); // Volume 5
myDFPlayer.EQ(0); // Normal equalization
// check which sentence parts were chosen
// select the audio files associated with the sentence parts
// play those two audio files stitched together
// alternatively, have audio files with all combinations and select the single audio file associated with the sentence combo
// for now, just play audio file 1
myDFPlayer.play(1);
}
void loop() {
// store the potentiometer value and map it to the volume
potVal = analogRead(A0);
volume = map(potVal, 0, 1023, 0, 30);  // 10-bit reading to DFPlayer volume range 0-30
Serial.println(volume);
// have the potentiometer control the volume
myDFPlayer.volume(volume);
delay(50);
}
Design Files
Cut Files (.DXF): https://drive.google.com/drive/folders/1Yt6SaO-wq5MGS04OlaIkHX6waaO9KMDI?usp=drive_link
CAD File (.SLDPRT): https://drive.google.com/file/d/1yFxEaSkwXuUxNTeNbVHcav830ZPkolbf/view?usp=drive_link
CAD File (.STL): https://drive.google.com/file/d/1E6lk8YXXbGlnPKHYRH9StBZrmF9zN7g-/view?usp=drive_link