Research

Our research spans a wide variety of areas, from Natural Language Processing (NLP) to Computational Cognitive Modeling. Some of our current and prior lines of research are described below.

Inference-based NLP

Inference concerns how sentences relate to one another, specifically with respect to their logical consequences. Consider the sentence, “Two black cars start racing in front of an audience.” Given this sentence, you might infer that the sentence “Two cars are racing together.” is true; in other words, the second sentence logically follows from the first. The sentence “A man is driving down a lonely road.”, on the other hand, contradicts the original sentence and is therefore false. This area of NLP is called Natural Language Inference (NLI). Our work deals with NLI and how it relates to paraphrasing and misinformation.
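
As a concrete illustration (not a description of our own models or data), these sentence pairs can be run through an off-the-shelf NLI classifier. The sketch below assumes the pretrained roberta-large-mnli model from the Hugging Face transformers library, which is only one example of such a classifier:

    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    # Illustrative only: any pretrained NLI model could be substituted here.
    tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

    premise = "Two black cars start racing in front of an audience."
    hypotheses = ["Two cars are racing together.",
                  "A man is driving down a lonely road."]

    for hypothesis in hypotheses:
        inputs = tokenizer(premise, hypothesis, return_tensors="pt")
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
        label = model.config.id2label[int(probs.argmax())]
        print(f"{hypothesis} -> {label} ({probs.max().item():.2f})")

A well-trained model should label the first pair as entailment and the second as contradiction, mirroring the judgments above.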

Recent Publications:

Intersecting NLP and Psycholinguistics

Psycholinguistics is concerned with how language is represented and processed in the mind/brain. Our work develops NLP models for predicting the psycholinguistic features of words, such as how easily a word can be visualized by a person (e.g., it is easy to picture what a dog looks like, but much more difficult to picture an action like running). We also examine parallels between the cognitive development children undergo and the “development” of a neural language model during training.
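
As a simplified sketch of what predicting such features can look like in practice (the data below are random placeholders rather than our actual ratings or embeddings), one can regress human word ratings onto word-embedding features:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # Placeholder data: in practice, X would hold word embeddings (e.g., 300-dim
    # GloVe vectors) and y would hold human ratings such as imageability scores.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 300))
    y = rng.uniform(1.0, 7.0, size=500)  # ratings on a hypothetical 1-7 scale

    model = Ridge(alpha=1.0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print("Mean cross-validated R^2:", scores.mean())

With real embeddings and ratings, the cross-validated R^2 indicates how much of the variation in a psycholinguistic norm can be recovered from distributional information alone.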

Recent Publications:

Automated Argument Analysis and Generation

If you have ever seen the comment section on a political Facebook post, or a Twitter debate over something controversial, you might notice one invariant fact: people are bad at reasoning. We use fallacies, we are affected by cognitive biases, we mischaracterize positions, and we often have misplaced overconfidence in poorly-thought-out views. Why is this the case? And most importantly, can we do better? Our work attempts to create AI-based tools that can understand argumentative dialogues, assess the quality of the arguments people make, and offer suggestions on how to address their weaknesses.

  • Warrant Game - People tend to be somewhat poor at evaluating whether an analogy is any good. For example: "Both coffee and alcohol change your mood when you drink them; both coffee and alcohol are liquids; both coffeeholics and alcoholics can't function until they've had their drink; alcoholics have a serious problem; THEREFORE, coffeeholics have a serious problem." Is this a good argument? Why or why not? How can we program an algorithm to reason in the way you just did?

We have developed a game / interaction framework called WG-A which allows people to evaluate an analogical argument without having to spend a lot of time understanding the deep theory behind what makes an analogy good or bad. WG-A makes it easier to determine key features typically associated with argument strength, and it may reveal hidden assumptions or fundamental reasoning incompatibilities. By presenting arguers with an issue and positions related to conspiratorial or biased thinking, WG-A may also serve as an educational tool for breaking biased beliefs down into their core values and building cognitive skills. We are currently running experiments to test WG-A's effectiveness in combating belief in conspiracy theories about COVID-19 and in reducing anti-Black racial bias.

Related Publications:
Cooper, M., Fields, L., Badilla, M., & Licato, J. (2020). WG-A: A Framework for Exploring Analogical Generalization and Argumentation. In Proceedings of the 42nd Cognitive Science Society Conference (CogSci 2020).

Licato, J. & Cooper, M. (2020). Assessing Evidence Relevance By Disallowing Direct Assessment. In Proceedings of the 12th Conference of the Ontario Society for the Study of Argumentation.

Fields, L. & Licato, J. (2022). Combatting Conspiratorial Thinking with Controlled Argumentation Dialogue Environments. In Oswald, S. and Lewinski, M. and Greco, S. and Vilata, S., eds. The Pandemic of Argumentation: 23. Springer Nature.

  • Aporia is an argumentation dialogue environment designed to create structured datasets of opposing interpretive arguments. Players compete by arguing for or against certain interpretations of open-textured rules, and the game is designed to be fun in order to incentivize willing participation and obtain useful datasets. Aporia is played in rounds by any group of three or more people. At the beginning of each round, two players are randomly chosen to argue against each other, and a third player is designated as a judge. The players are provided with an ethical rule from a given professional association and a scenario. For example, the rule “teachers need to act professionally with students” could be paired with the scenario “a teacher exchanges light-hearted jokes with a student during recess.” Would the teacher's action be considered “professional” in the sense meant by the rule? (A sketch of the kind of structured record each round produces appears after the publications below.)

    Related Publications:
    Marji, Z. & Licato, J. (2021). Aporia: The Argumentation Game. In Proceedings of The Third Workshop on Argument Strength (ArgStrength 2021).
    Licato, J. (2022). Automated Ethical Reasoners Must be Interpretation-Capable. In Proceedings of the AAAI 2022 Spring Workshop on “Ethical Computing: Metrics for Measuring AI's Proficiency and Competency for Ethical Reasoning”.
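
The structured records Aporia is meant to produce could be represented roughly as follows; the field names here are hypothetical and are shown only to illustrate the kind of dataset the game generates, not its actual format:

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical record format for a single round of Aporia; the actual
    # datasets may use different fields and structure.
    @dataclass
    class AporiaRound:
        rule: str                  # the open-textured rule under discussion
        scenario: str              # the concrete situation it is applied to
        pro_arguments: List[str] = field(default_factory=list)  # "complies" side
        con_arguments: List[str] = field(default_factory=list)  # "violates" side
        judge_ruling: str = ""     # the judge's decision and rationale

    example = AporiaRound(
        rule="Teachers need to act professionally with students.",
        scenario="A teacher exchanges light-hearted jokes with a student during recess.",
    )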


Automated Theorem Proving

We developed the latest version of MATR, a highly customizable natural deduction reasoner. Although this line of research is not currently active, we were previously able to show that MATR could produce concise proofs of both of Gödel's incompleteness theorems.

Related Publications:

Malaby, E., Dragun, B., & Licato, J. (2020). Towards Concise, Machine-discovered Proofs of Gödel's Two Incompleteness Theorems. In Proceedings of The 33rd International Florida Artificial Intelligence Research Society Conference (FLAIRS-33). AAAI Press.

Active Formalization

When it comes to things like morals, ethics, and laws, we normally have a set of rules that we understand and try to follow. These might be legal rules, or they might be moral principles (like the so-called “Golden Rule”). But we sometimes find that these rules are not enough to tell us what the right actions are. In those cases, we have to reason about the purpose of the rules---the reasons the rules exist in the first place---and act based on that. Without that flexibility, it's difficult to be truly moral, and robots are no exception. Can robots ever carry out this kind of reasoning? This is a problem being worked on by Dr. Licato's Advancing Machine and Human Reasoning (AMHR) Lab, which is devoted not only to making robots smarter, but to using these advances to make us better reasoners too. The lab is currently working on a $450K AFOSR Young Investigator Program award exploring this type of reasoning, which we call “active formalization.”

The loophole task - One example of active formalization can be seen in what we call the “loophole task.” Imagine you make a bet with your friend and agree that the loser needs to buy breakfast for the winner. You win the bet, and the next day, your friend shows up with a single Cheerio, which he pulls out of his pocket. Did he buy you breakfast?

Of course not, you might say. So you and your friend agree that a satisfactory breakfast involves at least 1200 calories of food (you like a big breakfast). The next day, he shows up with $20 worth of uncooked, rotten bacon. Again, you say, he hasn't kept up his end of the deal. So you go back to the drawing board, and you both agree that the food has to be the kind of thing that a respectable restaurant would serve.

This back-and-forth can go on forever. But each time you go back and refine the terms, you are performing a type of formalization---specifically, you are formalizing the description of what constitutes satisfaction of the terms of the original agreement. The research question here is, can this kind of formalization be performed automatically, or with the help of artificially intelligent software? We have reason to believe that the answer is 'yes', and are working towards that goal.

Related Publications:
Licato, J. & Zhang, Z. (2019). Evaluating Representational Systems in Artificial Intelligence. Artificial Intelligence Review, 52(2), 1463 - 1493.
Licato, J. & Marji, Z. (2018). Probing Formal/Informal Misalignment with the Loophole Task. In Proceedings of the 2018 International Conference on Robot Ethics and Standards (ICRES 2018).

PAGI World: A simulation environment for cognitive agents, created in Unity 2D (no longer maintained).

Selected Publications (last updated 2021)

2021

2020

2019

  • Boger, M., Laverghetta Jr., A., Fetisov, N., & Licato, J. (2019). Generating Near and Far Analogies for Educational Applications: Progress and Challenges. In Proceedings of the 2019 ICMLA Special Session on Machine Learning Applications in Education.

  • Ciampaglia, G. L., Licato, J., & Rosen, P. (2019). Visualizing the Evolution of Online Conversation using Discussion Mapper. In Proceedings of the 2019 Spring Symposium on Towards AI for Collaborative Open Science (TACOS).

  • Licato, J. & Cooper, M. (2019). Evaluating Relevance in Analogical Arguments through Warrant-based Reasoning. In Proceedings of the European Conference on Argumentation (ECA 2019).

  • Licato, J., Marji, Z., & Abraham, S. (2019). Scenarios and Recommendations for Ethical Interpretive AI. In Proceedings of the AAAI 2019 Fall Symposium on Human-Centered AI, Arlington, VA, 2019.

  • Quandt, R. & Licato, J. (2019). Problems of Autonomous Agents following Informal, Open-textured Rules. In Proceedings of the AAAI 2019 Spring Symposium on Shared Context.

2018

2017