In the spring of 2024, I took my favorite CS course to date, taught by the amazing Professor Elena Glassman at Harvard. The course provided a framework for designing usable, interactive systems from a software engineering perspective: how to facilitate inputs that are intuitive to the user, how to effectively communicate outputs, and how to expose computations like AI models to the user. For the final project, Edward Kang, a classmate and friend, and I put these considerations into practice with our project: Emojify.
Emojify is an AI-driven tool that helps users insert appropriate emojis into text messages. Emojis can be fun and goofy, so we enjoyed focusing on this, but the project also has serious use cases with real benefits. Not only would it be helpful for anyone who wants emojis to automatically populate in their digital messages, but it could particularly help individuals who have difficulty conveying their intended meaning through text, including those with Autism Spectrum Disorder. If integrated with services like Apple CarPlay, it could also enhance the driving experience by letting users automatically add emojis to their voice-to-text messages.
This was a very fun project that let me flex my React muscles and integrate the OpenAI API into a project for the first time. Computationally, this project isn't terribly complicated and is essentially a fancy ChatGPT wrapper. However, the reason I'm proud of it is precisely because of how it makes the ChatGPT inputs and outputs work smoothly with the user experience: everything from loading indicators to character counts, click-away behaviors, tooltips, and more. It's the multitude of little things I took into consideration that added up to make the experience feel smooth, intuitive, and professional. When classmates tested the project, so much of the praise was precisely for the little features they didn't even have to think about because we implemented them in such an intuitive manner. This is what a software engineer should do, and I'm grateful to this class for helping me practice that mindset. For a deep dive into everything from design to user tests on Emojify, our full paper is available below this summary. If you're short on time, here's a TL;DR:
Features:
Emojify: Automatically inserts suitable emojis into user messages.
Emoji Search: Allows users to find emojis by describing them in plain English.
Emoji Analyzer: Helps interpret the meanings of messages containing emojis.
Emoji Translate: Translates text into a language of pure emojis.
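Since each of these features is essentially a different instruction to the same model, one way to sketch the design is a map from feature to system prompt. The prompt wording and names below are my illustrative assumptions, not the exact prompts we used:

```javascript
// Hypothetical feature-to-prompt map; the actual prompt wording in Emojify may differ.
const FEATURE_PROMPTS = {
  emojify: "Insert fitting emojis into the user's message and return the rewritten message.",
  search: "The user describes an emoji in plain English. Reply with the best-matching emoji.",
  analyze: "Explain the likely intended meaning and tone of the user's emoji-laden message.",
  translate: "Translate the user's message into a sequence of emojis only, with no words.",
};

// Every feature shares one request shape; only the system prompt changes.
function buildMessages(feature, userText) {
  return [
    { role: "system", content: FEATURE_PROMPTS[feature] },
    { role: "user", content: userText },
  ];
}
```

Keeping the features as prompt variations over one request shape means adding a new feature is mostly a matter of writing a new system prompt.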
Implementation:
Used OpenAI’s GPT-3.5-Turbo for AI computation.
Focused on a simple user interface with customizable settings.
Added features like speech-to-text, loading indicators, and feedback options.
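Concretely, a minimal version of the core Emojify call might look like the sketch below, using the standard OpenAI Chat Completions endpoint over `fetch`. The helper names and prompt text are illustrative assumptions, not our exact implementation:

```javascript
// Illustrative sketch of calling GPT-3.5-Turbo for the Emojify feature.
// Builds the request body sent to OpenAI's Chat Completions endpoint.
function buildEmojifyRequest(message) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "Rewrite the user's message with fitting emojis inserted. " +
          "Return only the rewritten message.",
      },
      { role: "user", content: message },
    ],
  };
}

// Sends the request; in the UI, a loading indicator would wrap this await.
async function emojify(message, apiKey) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildEmojifyRequest(message)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

In the React frontend, the `await` would be bracketed by state updates (e.g., toggling a `loading` flag) so the loading indicator appears while the model responds.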
Technologies:
Chose React over Svelte for its robust UI libraries, explicit state management, and JSX syntax.
User Feedback:
Users really enjoyed the playfully intuitive design and suggested adding both text-to-emoji translation and explanations of Emojify's purpose.
Addressed feedback by enhancing UI elements and implementing new features.
Ethical Considerations:
Discussed handling harmful inputs and avoiding biased outputs.
Current implementation follows OpenAI’s guidelines to minimize biased responses.
Collaboration:
Nathan set up the foundation by building the flagship Emojify UI, integrating the OpenAI API, and adding Emoji Search.
Edward expanded with Emoji Translate, Emoji Analyzer, and a thumbs up/down feedback icon for outputs.
The semester has concluded, and while we were very happy with the A we received on this project, we'd like to take it further. I'm interested in publishing the project publicly and adding a proper backend so that users can create accounts that save their prompt history, much as ChatGPT does today.
As I update the project and add these features, you can find them at the GitHub repo here!