Your Portable Talking Enhancement Device to Help You Practice with Confidence
By Melp Meredith, Jay Davis, Rachhana Saish Baliga, Samikshya Satpathy
Ted is a smart, portable speaking coach designed to support users as they practice public speaking.
It listens while you speak, offers subtle guidance in the moment, and provides a clear performance summary afterward, all without judgment or pressure. Ted is especially helpful for students, early-career professionals, and non-native speakers who want to build their confidence in a low-stress way.
Why Ted Matters
Ted was inspired by what we heard directly from users during our formative research:
"I only get feedback after the presentation. I want help while I'm still speaking"
That single quote captured a gap we saw across our survey and diary study responses: people need feedback while they're practicing, not just after. But they also need that feedback to feel calm, helpful, and non-disruptive.
Ted fills that gap by providing gentle real-time guidance and clear post-session insights, creating a supportive practice experience that fits easily into everyday life.
What Ted Offers
Practice with real-time visual feedback that doesn't interrupt.
Reflect with personalized post-speech reports.
Choose a coaching style and focus areas.
Build confidence over time through supportive, non-judgmental feedback.
Public speaking is one of the most common sources of anxiety, especially for students, young professionals, and non-native English speakers. While there are tools that offer feedback after a presentation, they often come too late to be helpful in the moment. Users told us they wanted something different:
"I wish something could tell me how I'm doing while I'm still talking so I could fix myself on the spot."
"I only get feedback after the presentation. I want help while I'm still speaking."
In fact, over 90% of the people we surveyed said they would find real-time feedback helpful, but many also mentioned that current tools feel clunky or overly formal, or that they require practicing in front of others.
“I wish I could get feedback, but I don’t want to practice in front of people.”
“More user-friendly.”
We realized what people really need is a system that gives them gentle, live support while they speak and helps them grow over time without adding pressure or judgment. That insight became the foundation for Ted’s design.
"How might we support people during speech practice by giving them real-time, gentle feedback without breaking their flow?"
Idea and Prototype
Ideation
As a group, we knew we wanted to create something small and cute. We brainstormed a few options, like an anthropomorphic flower or animals like bunnies, cats, and dogs. We ultimately landed on a bear. The choice was a bit arbitrary since TED could really take any form, but it worked well that “Ted” could be short for “teddy bear.”
Design Decisions
Once we settled on the bear concept, we started sketching ideas. This ended up being the most challenging part. We debated whether TED should have an LED ring, a TFT screen, or both. We weren’t sure what would look right aesthetically or what would be feasible to implement.
Eventually, we chose to include both IoT components to make TED as accessible as possible. That led to another round of decisions: figuring out the size and placement of the NeoPixel ring and TFT screen. We spent several days narrowing down the dimensions and orientation of both devices. Once we had a layout we liked, we moved into Figma to develop a wireframe.
What Worked and What Changed
Once we assembled a rough version of TED, we realized the combination of the TFT screen and NeoPixel ring actually worked better together than we expected. The LED ring added a layer of emotion or mood that complemented the visuals on the screen. Getting both to physically fit and look cohesive on the bear was a challenge, but the final version felt balanced.
We didn’t do formal user testing due to time constraints, but even showing it around informally, people immediately recognized it as a bear and found it cute — which was one of our main goals. We were surprised at how much personality TED had once everything was assembled. If we had more time, we’d definitely want to test how people interact with it, what they interpret from the light and screen signals, and how they feel about using it in context.
Ted is a portable, AI-supported speech coaching device that combines a built-in microphone, LED feedback ring, and small screen to deliver real-time visual feedback during practice, along with personalized reports afterward. Ted connects via Bluetooth to a companion app that tracks progress and lets users customize their coaching experience.
Our final concept is centered around three main goals:
Reduce anxiety and distraction by avoiding verbal interruptions
Offer timely, actionable feedback
Support different types of users with flexible coaching modes
Key Features
Ted uses a glowing LED ring and simple on-screen messages to guide users while they’re speaking.
Red = “Slow down”
Yellow = “Speak louder”
Green = “Nice pace!”
This non-verbal feedback helps users adjust naturally without breaking their flow.
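If we built this logic out in software, the mapping could be a small lookup from live speech metrics to ring colors. The sketch below is purely illustrative: the words-per-minute and volume thresholds, and the show_ring helper, are our own assumptions rather than behavior of the current Wizard-of-Oz prototype.

```python
# Illustrative sketch: map live speech metrics to Ted's LED ring colors.
# The thresholds and the show_ring() helper are hypothetical placeholders.

RED = (255, 0, 0)       # "Slow down"
YELLOW = (255, 180, 0)  # "Speak louder"
GREEN = (0, 255, 0)     # "Nice pace!"

def ring_color(words_per_minute: float, volume_db: float) -> tuple:
    """Pick a ring color from the two metrics Ted tracks in real time."""
    if words_per_minute > 170:   # speaking too fast
        return RED
    if volume_db < 55:           # speaking too quietly
        return YELLOW
    return GREEN                 # comfortable pace and volume

def show_ring(color: tuple) -> None:
    # On real hardware this would write to the NeoPixel ring,
    # e.g. through the Adafruit CircuitPython neopixel library.
    print(f"LED ring -> {color}")

show_ring(ring_color(words_per_minute=182, volume_db=62))  # RED: slow down
```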
After each session, Ted sends a clear, supportive report to the user’s phone. It includes:
Pacing and volume
Filler word usage
Clarity insights
Over time, these reports show patterns and progress, helping users reflect, adjust, and grow with each practice.
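As a rough sense of what such a report could contain, here is a minimal sketch that assembles a summary from a transcript and basic audio statistics. The field names, filler-word list, and example values are assumptions for illustration, not Ted's final report format.

```python
# Illustrative sketch of a post-session report built from a transcript and
# simple audio statistics. Field names and the filler-word list are
# assumptions, not Ted's final report schema.
from dataclasses import dataclass, field

FILLER_WORDS = {"um", "uh", "like", "so"}

@dataclass
class SessionReport:
    duration_s: float
    words_per_minute: float
    avg_volume_db: float
    filler_counts: dict = field(default_factory=dict)

def build_report(transcript: str, duration_s: float, avg_volume_db: float) -> SessionReport:
    words = transcript.lower().split()
    wpm = len(words) / (duration_s / 60)                      # pacing
    fillers = {w: words.count(w) for w in FILLER_WORDS if w in words}
    return SessionReport(duration_s, round(wpm, 1), avg_volume_db, fillers)

report = build_report("so um today I want to talk about accessibility um in design",
                      duration_s=6.0, avg_volume_db=61.0)
print(report)
```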
Before each session, users can tell Ted what they’d like to focus on – for example:
“Today, I want to work on my pacing.”
Ted then tailors its real-time feedback and post-session report to highlight that specific area.
Users can choose to focus on:
Pacing
Filler words
Volume
allowing each practice to feel purposeful and aligned with personal goals.
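One lightweight way this could work in software is a small preset table that maps the chosen focus to the cues and report sections Ted emphasizes. The focus names and settings below are illustrative assumptions.

```python
# Illustrative sketch: a session focus chosen before practice shapes which
# real-time cues and report sections Ted emphasizes. The preset names and
# contents are assumptions.

FOCUS_PRESETS = {
    "pacing":       {"ring_cues": ["slow_down", "nice_pace"], "report_sections": ["pacing"]},
    "filler_words": {"ring_cues": ["filler_alert"],           "report_sections": ["filler_words"]},
    "volume":       {"ring_cues": ["speak_louder"],           "report_sections": ["volume"]},
}

def configure_session(focus: str) -> dict:
    """Return the cue/report configuration for today's chosen focus area."""
    return FOCUS_PRESETS.get(focus, FOCUS_PRESETS["pacing"])

print(configure_session("pacing"))  # e.g. "Today, I want to work on my pacing."
```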
Ted is a smart, portable speaking coach designed to help users build public speaking confidence through real-time guidance and post-speech feedback—all without judgment or pressure.
Ideal for students, early-career professionals, and non-native speakers, Ted offers a safe, low-stress way to practice and improve.
With future advancements in AI and speech analysis, Ted could become a scalable tool for everyday communication training—making confident speaking accessible to everyone, anywhere.
* The image is for reference purposes, illustrating how we envision TED being developed in the future.
NLP and LLM technology to analyze speech data and communicate with the user verbally
Speakers and a microphone to detect speech and converse with the user
The phone prototype would become an app that connects to TED via Bluetooth or Wi-Fi
The vision is that, in addition to the current features, TED would also use computer vision for facial and body-gesture recognition, analyzing posture, body positioning, and facial expressions while the user speaks.
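As one illustration of how the NLP/LLM piece could work, the sketch below sends a practice transcript to an OpenAI-style chat API and asks for short, supportive feedback. The model name, prompt, and overall approach are assumptions about a possible future implementation, not something the current prototype does.

```python
# Illustrative sketch of LLM-based speech analysis for a future version of TED.
# Assumes the OpenAI Python SDK; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def coach_feedback(transcript: str) -> str:
    """Ask an LLM for short, supportive feedback on a practice transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a gentle public-speaking coach. "
                        "Give two short, encouraging suggestions."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# print(coach_feedback("So um today I'd like to, uh, talk about our project..."))
```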
Bringing Ted to Life
In our current prototype, we simulate TED’s behavior manually using Wizard of Oz techniques. In the real product, these simulations would be replaced by fully functional hardware and software. TED would include:
A high-sensitivity microphone array to capture the user’s speech clearly, even in noisy environments. These microphones would detect pacing, volume, filler words, and hesitation patterns.
A compact wide-angle camera placed in TED’s forehead or chest area to record facial expressions and gestures, allowing TED to interpret nonverbal communication like posture, eye contact, and energy.
An embedded processor capable of running lightweight machine learning models locally. This processor would handle real-time speech analysis, facial detection, and gesture recognition without needing to send data to the cloud.
A NeoPixel LED ring around the bear’s face or collar area to provide soft color-based feedback in real time.
A small TFT screen embedded in the belly or face to display text prompts or visual cues without being distracting.
Bluetooth and Wi-Fi modules for syncing with a mobile or desktop application, where users can access their feedback reports and track progress over time.
TED would no longer rely on manual input or scripted responses. Instead, it would autonomously recognize speech patterns and gestures, and respond with immediate and personalized feedback.
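To give a sense of what that embedded processor would actually compute, here is a simplified sketch of a real-time loop that estimates volume and pacing from short audio chunks. The sample rate, chunk length, and the on-device recognizer are assumptions; a real build would use the microphone array and an embedded speech model.

```python
# Simplified sketch of the real-time analysis loop TED's embedded processor
# might run. The audio source, sample rate, and recognizer are assumptions.
import math

SAMPLE_RATE = 16_000     # samples per second (assumed)
CHUNK_SECONDS = 1.0      # length of each audio chunk (assumed)

def rms_db(samples: list[float]) -> float:
    """Root-mean-square level of one chunk, in relative decibels."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))

def analysis_loop(chunks, recognizer):
    """Consume audio chunks and yield (volume_db, words_per_minute) per chunk."""
    for samples in chunks:
        volume = rms_db(samples)
        words = recognizer(samples)           # hypothetical on-device recognizer
        wpm = len(words) / (CHUNK_SECONDS / 60)
        yield volume, wpm                     # fed into the LED/TFT feedback logic
```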
Vision for the Final Product
TED would operate through three main modes:
Practice Mode
Users receive real-time guidance through the LED ring and subtle screen messages. These visual cues would signal when to slow down, speak up, or keep a steady pace. The goal is to help users self-correct naturally, without losing focus or being interrupted.
Performance Mode
After finishing a speech, users receive a detailed session report through TED’s companion app. This report includes insights on clarity, pacing, tone, filler word frequency, and gesture use. Feedback is structured and nonjudgmental, encouraging reflection and growth without pressure.
Warm-Up Mode
Before practice sessions, TED provides light vocal exercises and confidence-boosting phrases. This mode is designed to help users feel grounded and ready. TED can also track stress markers in the user’s voice, such as pitch variability or breathing patterns, and respond with calming encouragement.
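The stress-marker idea could start from something as simple as pitch variability. The sketch below uses the librosa library's pYIN pitch tracker to estimate how much the fundamental frequency varies across a warm-up recording; the threshold and the calming message are our own assumptions, not validated stress measures.

```python
# Illustrative sketch: estimate pitch variability from a warm-up recording
# using librosa's pYIN pitch tracker. The threshold is an assumption and
# not a validated stress measure.
import numpy as np
import librosa

def pitch_variability(path: str) -> float:
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced_f0 = f0[voiced_flag]            # keep only voiced frames
    return float(np.nanstd(voiced_f0))     # standard deviation in Hz

if __name__ == "__main__":
    variability = pitch_variability("warmup.wav")
    message = "Take a slow breath." if variability > 40 else "You sound steady!"
    print(f"Pitch variability: {variability:.1f} Hz -> {message}")
```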
A Supportive Everyday Tool
TED is built for real-life scenarios. It is ideal for students preparing for class presentations, non-native speakers practicing pronunciation, and professionals refining pitches or job interviews. TED is soft, portable, and approachable. It is easy to use anywhere—at home, in the office, or while traveling.
In future versions, TED could integrate with mobile speech apps, AR environments, or even smart home systems to expand its utility. It could become part of someone’s daily routine, offering encouragement and accountability with a personality that feels friendly and familiar.
Limitations of Our Design
While Ted successfully simulates the user experience, some of its core features, such as speech processing and gesture tracking, were Wizard-of-Oz–based. A fully functioning version would require deeper technical development and more testing across diverse user scenarios. We also haven't yet explored how Ted might work in public or group settings.
Next Steps
To take Ted further, our team would focus on:
Building a real-time feedback engine with embedded AI
Expanding the mobile interface for user customization
Testing with broader audiences, including people with speech disorders or high communication anxiety
These steps would help transform Ted from a guided prototype into a fully operational product.
Insights Gained
What stood out most during testing was how much users appreciated encouragement, not just feedback. Small affirmations like "Nice pace!" had a surprisingly positive impact. It reminded us that feedback systems don't just teach; they also shape how people feel. That emotional layer became a bigger part of our design than we originally expected, and it's something we'll carry into future work.
Ted was designed to reduce pressure while practicing public speaking, but we recognize it may still pose challenges for some users.
Users with visual impairments may not be able to interpret the LED feedback or screen messages. People with limited motor function might have trouble physically handling Ted or navigating the companion app. Those who are both deaf and blind would face the greatest challenges, as the system relies heavily on both visual and auditory interaction.
To make Ted more accessible in future iterations, we would explore:
Multiple feedback options: visual, audio, and haptic
Customizable interaction settings
Screen reader compatibility
Co-design sessions with users of varying abilities
Building for accessibility means considering different ways of sensing, moving, and communicating — not just modifying outputs, but rethinking how support is delivered.