Our studies investigate a wide variety of questions from the language sciences using different tools, allowing visitors to experience something new at each station. The data and the feedback we get from museum guests are incredibly valuable to us as scientists... and extremely rewarding to us as people! We look forward to welcoming more researchers and research activities to the Language Science Station in the future.
Have you participated in one of our studies at Planet Word? Find it here to see what you've helped us learn!
Learn more about our current studies here. Check back later to learn more about the results as our research teams collect and analyze the data.
Scroll down to see what we've learned from our past studies, where our data collection is complete.
This study asks how people make predictions about upcoming words and phrases depending on the topic of conversation and local sentence context. Participants play a game where they fill in the blanks of a text conversation after being presented with more or less of the preceding text thread. How readers generate expectations is a well-studied topic in the lab, but the move to Planet Word and to more diverse text message-based conversations has the potential to generate exciting new insights.
Research team members: Patrick Plummer (Howard Psychology)
This study explores how different areas of the brain engage when we're listening to speech. In particular, we're looking at what's happening in brain areas involved in language and attention when there are competing things to listen to. We use the neuroimaging technique fNIRS (functional Near-Infrared Spectroscopy) to measure brain activity while participants listen to a story played into one ear while ignoring a different story played into the other ear. The highly mobile, non-invasive fNIRS system presents a rare opportunity to bring neuroimaging out of the lab, so participants can watch (and take a selfie with!) their brain activity in real time.
Light up your Language Brain is especially interested in testing participants from all backgrounds (linguistic, age, sex, race/ethnicity, etc.) to investigate individual differences in brain activation. In particular, we want to understand how people of various language backgrounds (i.e., mono/bi/multilinguals) perform on cognitive tasks like this competing speech task, which involve what is called executive function. Most previous studies of executive function have been conducted with homogeneous groups of participants in lab settings, which is why conducting this study with Planet Word visitors is so important.
We're excited to be capturing the incredible individual variation in how we process cognitively demanding language-based information, and exploring brain-related patterns in real time!
Research team members: Gavkhar Abdurokhmonova (UMD Human Development), Rachel Romeo (UMD Human Development)
This study measures how people of diverse backgrounds rely on potentially imperfect translations produced by AI tools. Participants play a game where they help a character, Sofia, find a scarf she lost somewhere at Planet Word. The data collected will help design more trustworthy AI tools in the future.
The Language Science Station is proud to partner with the NSF Institute for Trustworthy AI in Law & Society (TRAILS) to present this study.
Research team members: Marine Carpuat (UMD Computer Science), Ge Gao (UMD Information Studies), Calvin Bao (UMD Computer Science), Dayeon (Zoey) Ki (UMD Computer Science), Marianna Martindale (UMD Information Studies), Yimin Xiao (UMD Information Studies), Yongle Zhang (UMD Information Studies)
This study investigates whether factors like person familiarity contribute to memory for conversation. Groups of visitors, some who know one another and some who don’t, compete in a team-based “reality show” where they take turns naming items to pack for a deserted island. Then they are given a surprise memory test asking what items were said, and who said them. Surprisingly, relatively little is known about memory for conversation, but the museum environment offers a natural opportunity to collect this type of critical data.
Research team members: Charlotte Vaughn (UMD Language Science Center)
These are studies that we have run at Planet Word in the past. Some data analysis is still ongoing... watch this space for more updates!
Previous research has shown that when we process language, we actively predict what the next word will be, which helps us comprehend more efficiently and accurately. In Race the Robot, we wanted to better understand how the human brain makes predictions, and whether adults and children predict in the same way. We did this by asking participants to complete sentences as fast as they could, and we measured what they said and how quickly they said it. Our first version (1.0) had participants read the sentences on a screen, and our second version (2.0) had participants listen to the sentences.
What we've learned so far:
We have found that both adults and children are faster to produce responses in highly constraining contexts, even when the response they produce is not the most frequent response. This pattern supports a "race model" of prediction, where multiple candidate words "race" to be produced, and stronger competitors lead to faster winning times. People sometimes produce responses that are contextually inappropriate but highly tempting (e.g., This is the bee that the girl… stung), and seeing pictures associated with the context rescues adults, but not children, from producing those lures.
Race the Robot helps us understand what kind of mechanism underlies human word prediction processes and helps us better understand what types of information we can or cannot effectively use to generate expectations.
Learn more in our presentations and publications:
Research team members: Eun-Kyoung (Rosa) Lee (UMD), Katherine Howitt (UMD), London Dixon (UMD), Masato Nakamura (Saarland University), Tal Ness (UMD), Colin Phillips (Oxford, UMD)
E is for Expert invited Planet Word visitors to tell us about a topic they know a lot about, and a topic they know less about.
The purpose of E is for Expert was to learn about how people talk differently on topics they know a lot about (are an “expert” in) compared to topics they don’t know a lot about (are a “novice” in). The way we talk is shaped by our experiences. For example, a person who loves watching basketball will probably be able to say a lot more about what's happening in the game than someone who doesn’t care or know much about sports.
Then, to see if these differences are detectable by others, we asked both humans and large language models (i.e., ChatGPT) to rate language samples by similarity to see if experts sound like other experts, and novices sound like other novices.
What we've learned so far:
We found that humans and GPT-4 (but not GPT-3.5) were above chance in judging experts as more similar to other experts, and novices as more similar to other novices. Both were better at judging adult speech compared to child speech.
These findings suggest that experts and novices do talk differently from each other, and that these differences are detectable by both humans and AI. In turn, we may be able to infer what we do or do not have in common with other people just from the way we talk about certain topics.
Learn more in our presentations and publications:
Research team members: Yi Ting Huang (UMD HESP), Sophie Domanski (UMD HESP)
Guess the Sign invited Planet Word visitors to guess the meaning of signs in American Sign Language.
Sign languages show significant iconicity, using shapes and actions that reflect general human experiences. For instance, the sign for "toothbrush" may mimic its shape or the act of brushing. However, studies (e.g., Sehyr & Emmorey, 2019) reveal that naïve observers struggle to guess the meaning of signs without prior knowledge. We aimed to replicate such studies while positioning our study in the museum setting, allowing us to engage with a population widely diverse in age and other demographics.
What we've learned so far:
Our results replicated Sehyr & Emmorey’s 2019 findings, regardless of participants’ age (adult vs. child) or linguistic background (monolingual vs. multilingual). Of the 425 signs we tested, only one, the sign for “camera,” was guessed accurately 100% of the time, and only 22 signs (5%) were guessed accurately more than 50% of the time. Ninety-five percent of our signs were guessed with less than 50% accuracy, and 166 signs (39%) received no accurate guesses at all.
This brings to light an important conversation about how iconicity (the resemblance between a sign and its meaning, present in most signs but often recognized only in hindsight) does not translate to transparency (the ability to guess a sign’s meaning before being told). It is very interesting that sign languages can have a great deal of iconicity while maintaining the complex properties common to languages across modalities!
Research team members: Deanna Gagne (Gallaudet University), Laura Wagner (The Ohio State University), Desirée Kirst (Gallaudet University), Marjorie Bates (Gallaudet University)
Guess the Story invited Planet Word visitors to learn three signs in American Sign Language using one of two teaching methods, and measured how well they learned to produce the signs, and their attitudes toward sign language.
Previous research has shown that when a sign closely resembles its meaning (i.e., is iconic), it tends to be produced less accurately by hearing, college-aged learners (Ortega & Morgan, 2015). This finding clashes with a common belief that sign languages are easier to learn due to their iconic nature. We studied how different teaching strategies affect production accuracy and attitudes of non-signers.
What we've learned so far:
Participants learned ASL signs in one of two ways: iconic instruction (focusing on what the sign means) or arbitrary instruction (focusing on how the sign looks). We found that learners in the iconic condition were less accurate in production, especially in handshape. However, they felt more confident about sign language learning. Both groups considered sign language easier to learn than spoken language, but iconic learners showed a greater change in attitude from the start to the end of the study.
Our results suggest that iconic teaching strategies boost learners’ confidence in their language abilities while negatively impacting their production accuracy. Our findings caution against relying solely on iconic teaching strategies and suggest a need to integrate them with other methods for effective instruction.
Learn more in our presentations and publications:
A 1-minute video summary of the project in International Sign for the SIGN10 conference (2024)
Poster presented at Formal and Experimental Advances in Sign Language Theory (FEAST, 2024)
Research team members: Kaj Kraus (Gallaudet University), Marjorie Bates (Gallaudet University), Desirée Kirst (Gallaudet University), Nikole Patson (The Ohio State University), Laura Wagner (The Ohio State University), Makayla Yake (The Ohio State University), Deanna Gagne (Gallaudet University)
When people sustain concussions (the mildest form of brain injury), most experience symptoms that dissipate within weeks or months. However, some children experience diverse, prolonged deficits that can impact their communication and academic performance. Despite this, no single tool or framework exists for screening English-speaking children for cognitive-communication difficulties after concussion. Our Language and Concussion study is a step towards building such a tool.
We first tested a number of children with concussions to identify language skills that they seemed to have problems with. Then, at Planet Word, we collected data from visitors using short games that captured those same skills. Our goal with the Planet Word data collection is to gather "norming data," or data from a wide range of participants who don't have concussions, to find out what the range of performance on our games looks like when the brain is functioning normally.
It can be really hard to identify when a child has a concussion. There are some good tests for adults, but there aren’t tests for young children that capture their language, and children aren’t as adept as adults at self-diagnosis. We want to create a good tool that helps parents, schools, sports teams, and doctors provide the right kind of support for children who need it once they’ve had a concussion, and that helps identify when they are ready to return to the classroom and their friends.
We are in the process of analyzing our data... stay tuned for more information!
Research team members: Rochelle Newman (UMD HESP), Melissa Stockbridge (Johns Hopkins), Andrea Zukowski (UMD Linguistics)