Monday, June 9, 2025
A Q&A with Matthew Bui
Matthew Bui reflects on advancing data justice and building community partnerships through public scholarship.
Can you tell us a bit about your background and what led you to pursue a career in academia and public scholarship?
My academic career and public scholarship are deeply tied to my identity as the son of refugees and as someone who comes from a working-class, heavily immigrant neighborhood. These experiences shape my commitment to centering voices and perspectives that are often overlooked in research: mainly, to foreground and elevate underrepresented communities. I constantly ask and reflect on whose views and interests are reflected in a given digital platform or dataset, and what underlying structures shape a particular phenomenon.
For this reason, I see research—and education more broadly—as deeply connected to my work at the university. I focus on teaching and mentoring students to think critically and develop the language needed to analyze and address complex problems. But this work also extends beyond the university. Public scholarship and engagement are essential to my role as a scholar, whether I’m collaborating with community organizations to address research gaps or engaging public audiences to raise awareness about digital technologies and their impacts.
Your work centers on data justice, race, and technology. How do you define data justice, and why is it an important lens for understanding digital platforms today?
Great question. To me, data justice refers to a deep recognition of the harms and risks associated with data, both past and present, especially as data-driven technologies become more embedded in everyday life. It is not only about acknowledging bias or harm in data and technology, but also about reimagining how these systems can be made accountable to the communities they affect, particularly those that have been harmed by them historically. Scientific racism, for example, speaks to a long history of data being weaponized against marginalized communities.
In response, my research often involves spotlighting and critiquing dominant assumptions that render certain populations less visible or less legible in elite or institutional spaces, such as low-income communities and communities of color. Yet, I also unearth and explore how these same communities can “innovate” and form new models for making and reappropriating data and technology to work for them. Whether it is a first-generation immigrant using digital delivery platforms to grow a small business, or a local organization documenting the erasure of longtime residents, these examples show how technology can be both harmful and liberatory.
As such, my work often seeks to expand notions of expertise within technology design, showing the merits and importance of centering lived experience in design and, by extension, in data justice. Of note, I draw from Sasha Costanza-Chock's Design Justice to think through these themes.
You’ve been deeply engaged in community-based research, especially in Detroit. What has that work looked like, and how have local partnerships helped shape your approach?
In my Detroit-based work, I’ve been partnering with Michigan-based organizations that serve various sub-communities within the Asian American population through an OVPR Anti-Racism grant. These have included groups like APIAVote Michigan and ACA Detroit. Recently, I conducted focus groups with these partners to better understand how Asian Americans are exposed to and experience misinformation and disinformation, which we refer to more broadly as problematic content online. This work was conducted in deep collaboration with the civil rights organization Asian Americans Advancing Justice | AAJC, and it informed a recent report.
I’m also beginning to build partnerships with local organizations focused on design justice, with the goal of co-creating new and improved models for increasing participation in Reparative AI design. Although this project is still in its early stages, it has already been incredibly valuable to learn from local experts at the Allied Media Project, the Detroit Community Technology Project, and rootoftwo, among others, and to launch a Design Justice course based in Detroit.
One of your recent projects explores how online reviews can reflect and reinforce structural inequality. What can we learn from this kind of digital data, and what change do you hope it inspires?
One of the key takeaways from this project is the need to unpack our assumptions about what makes a platform intervention successful. That is, in this work, we show that digital platforms often have skewed user bases that tend to overrepresent white and wealthy users. As a result, even well-intentioned efforts, such as boosting the visibility of Black-owned businesses on Yelp, can lead to unintended consequences.
For example, we found that after receiving increased visibility, Black-owned businesses in Detroit and Los Angeles experienced a disproportionate decline in their online reputation compared to non-Black-owned businesses during the same period. This suggests that cross-cultural frictions can emerge in online spaces, especially when not all platform users know how to engage with the results of a given platform intervention.
Through this and related work, we can critically and creatively leverage digital data to answer important questions about how platforms can be designed, leveraged, or adapted to address racial and class inequality rather than reinforce it.
Looking ahead, what excites you most about the future of your work or the broader field of information studies?
I’m really excited about how rapidly the field is evolving and how relevant it remains to the pressing conversations of our time. It keeps me (and my colleagues) very busy. As technologies continue to evolve, I see my work remaining focused on ensuring that data and technology are designed with all communities in mind and used to serve the public good.