Current Research Projects

Summer 2023

Generative A.I. with bias reduction techniques

Generative Pre-trained Transformer (GPT) models are a family of transformer-based language models originally developed by OpenAI; Anthropic, Cohere, Google, and many other companies now build comparable systems.  These models use machine-learning techniques to generate human-like text: they can write paragraphs and entire articles on a given topic, perform translation tasks, summarize long texts, and much more.  GPT has achieved state-of-the-art performance on many NLP benchmarks and powers some of the most advanced chatbots of their kind.  However, like all artificial intelligence (AI) systems, these models can be affected by bias.  Bias can enter through the data they are trained on: if that data reflects biased opinions, the model can reproduce and amplify them.  It is therefore important to carefully identify and address potential biases in NLP chatbots to ensure that they are fair, unbiased, and credible.
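As a toy illustration of how bias can hide in training data, the sketch below counts how often target words co-occur with two groups of context words in a small corpus; a large imbalance hints at associations a model trained on that corpus could absorb.  The function name and toy corpus are illustrative assumptions, not part of any actual GPT pipeline.

```python
from collections import Counter

def cooccurrence_bias(corpus, target_words, group_a, group_b, window=5):
    """Compare how often each target word co-occurs with two groups of
    context words (e.g. gendered pronouns) within a token window.
    A strong imbalance suggests the corpus could teach a model a
    biased association.  Toy sketch, not a production bias metric."""
    counts = {w: Counter() for w in target_words}
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok in counts:
                for ctx in tokens[max(0, i - window): i + window + 1]:
                    counts[tok][ctx] += 1
    # For each target word, report (count with group_a, count with group_b).
    return {w: (sum(counts[w][g] for g in group_a),
                sum(counts[w][g] for g in group_b))
            for w in target_words}
```

For example, on the two-sentence corpus `["The doctor said he was busy", "The nurse said she was busy"]`, probing `["doctor", "nurse"]` against `["he"]` and `["she"]` exposes the occupation–pronoun skew.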


Diversity of Ideas in Higher Education


Many of us assume that liberal democracies encourage diversity of opinions and ideas.  This summer, we will quantify "diversity of ideas" in large historical corpora spanning the past century.  We want to understand whether the "space of ideas" has expanded or contracted over long periods of time.  For this research, we will learn and apply recent machine-learning methods, in particular document and word embedding techniques.  Your job is to work with like-minded fellow students to first identify appropriate text corpora (such as Time magazine or local newspaper archives).  Over the course of the summer, we will dissect and analyze these texts to ask: did the space of opinions widen or narrow as the past century progressed?

Studying minimal conditions for reinforcement learning agents to learn optimal strategies in impartial games


It is well known that reinforcement-learning algorithms can converge to the optimal strategy for impartial combinatorial games such as Nim.  This summer, we will study convergence conditions for groups (societies) of agents consisting mostly of Q-learning agents mixed with a few optimal-strategy agents.  The broader societal question we would like to explore: can a good learning strategy make a difference to the welfare of the whole society?  Abstracting the learning objective as discovering "truths," we would like to learn how to overcome misinformation through seeding techniques and through the interaction design of a society of agents.
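As a minimal sketch of the single-agent baseline, the following tabular Q-learning for one-pile Nim (take 1-3 stones; taking the last stone wins) converges to the known optimal strategy of always leaving the opponent a multiple of four stones.  A negamax-style target exploits the game being impartial.  The game size, hyperparameters, and function names here are illustrative assumptions, not the project's actual setup.

```python
import random

def train_nim_q(n_stones=10, max_take=3, episodes=20000,
                alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning for one-pile Nim via self-play.
    Because the game is impartial, one shared Q-table serves both
    players, and the bootstrap target negates the opponent's best value."""
    rng = random.Random(seed)
    Q = {}
    q = lambda s, a: Q.get((s, a), 0.0)
    moves = lambda s: range(1, min(max_take, s) + 1)
    for _ in range(episodes):
        s = n_stones
        while s > 0:
            # Epsilon-greedy move selection.
            a = (rng.choice(list(moves(s))) if rng.random() < eps
                 else max(moves(s), key=lambda m: q(s, m)))
            s_next = s - a
            if s_next == 0:
                target = 1.0   # taking the last stone wins
            else:
                # Negamax target: our value is minus the opponent's best.
                target = -gamma * max(q(s_next, b) for b in moves(s_next))
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s_next
    policy = {s: max(moves(s), key=lambda m: q(s, m))
              for s in range(1, n_stones + 1)}
    return Q, policy
```

After training, the greedy policy takes `s % 4` stones whenever `s % 4 != 0`, matching the classical optimal play; positions with `s % 4 == 0` are losing no matter what.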

Investigating sources of algorithmic bias in A.I. / M.L. applications


Algorithmic bias is an increasingly serious societal issue.  It occurs when the outcomes of a software program are systematically skewed by the data collected or by algorithms designed by non-representative groups of people.  For example, Amazon scrapped its "artificial intelligence" based recruiting tool after its selections were shown to be biased against women, and Twitter removed an automatic image-cropping feature because of inherent bias against dark-skinned people.  Other examples include search engine results and social media platforms.  All of these wonders of "AI" already have significant impact, ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity.  In this research, we are interested in studying algorithmic biases that reflect "systematic and unfair" discrimination.
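One common, simple way to quantify systematically skewed outcomes, used here purely as an illustration (function names and toy data are assumptions), is the disparate-impact ratio of selection rates between groups; ratios below the conventional "four-fifths rule" threshold of 0.8 are often treated as evidence of adverse impact.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected_bool) pairs.
    Returns each group's fraction of positive outcomes."""
    totals, hits = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's.  Values below 0.8 fail the common 'four-fifths rule'."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]
```

For instance, if a hypothetical screening tool selects 6 of 10 applicants from group A but only 3 of 10 from group B, the ratio is 0.5, well below the 0.8 threshold, flagging the tool for closer audit.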