Research

Robust Trust-based Decision Making

In online open systems, users or intelligent agents often share relevant information with each other, e.g., review sharing in e-commerce or car-to-car communication in autonomous driving. While such communication enriches decision making, malicious users exploit it by injecting fake information, leading to wrong decisions. Trust models attempt to suggest the correct decision by evaluating the trustworthiness of information sources. However, the typical empirical solutions in the literature cannot guarantee the quality of trust-based decisions in the face of uncertain and flexible malicious behaviour. Guaranteed decision robustness is crucial for safety-critical applications and for promoting trust in autonomous systems. Key questions include: what is the worst an attacker can do? What if attackers collude or camouflage? Is it possible to guarantee decision accuracy while allowing attackers to exist?
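To make the setting concrete, below is a minimal sketch of a Beta-reputation-style trust estimate, a standard construction from the trust literature. The function name and numbers are purely illustrative; this is not one of the specific trust models studied in the works on this page.

```python
# Minimal, illustrative Beta-reputation-style trust estimate.
# Not one of the specific trust models discussed on this page.

def trust_score(positive, negative):
    """Expected trustworthiness of a source after observing
    `positive` consistent and `negative` inconsistent reports,
    i.e. the mean of a Beta(positive + 1, negative + 1) posterior."""
    return (positive + 1) / (positive + negative + 2)

# A source with a mostly honest history is trusted more than an unknown one.
experienced = trust_score(positive=9, negative=1)  # 10/12 ≈ 0.833
unknown = trust_score(positive=0, negative=0)      # 1/2 = 0.5
```

A decision maker can then, for instance, weight each received report by the reporter's trust score; the question studied here is what an attacker can do to such schemes in the worst case.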

Vehicle to Vehicle Communication

Ad-hoc Network

Multi-Agent System

E-commerce

The Worst-Case Attacks

Given that it is very challenging to precisely predict what strategy an attacker will take, can we instead characterise the worst case for a decision maker? In the works below, we uncover the worst-case attacks and prove that they successfully frustrate several popular trust models. The fundamental assumptions are that the communicated information can be modelled as finite and discrete, and that a ground truth exists. Based on this, the first three works assume no subjectivity for honest users, while the last work considers the impact of subjectivity.

A general view of information communication

A decision maker (truster) aims to learn from feedback; it is worse off if an attacker makes it learn less, and the worst-case attacks are those under which the truster learns the least. Three scenarios are studied:

Independent attackers -- published in AAMAS 2015

Collusive attackers -- published in IJCAI 2015

Dynamic attack strategies -- published in AAAI 2016

Some interesting findings are: malicious feedback is usually informative; collusion does not strengthen attacks; commonly known camouflage attacks are far weaker than the worst-case attacks.
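The intuition that the worst case is where the truster learns the least can be illustrated with a toy information-theoretic sketch. All names and parameters below are hypothetical, and this is not the analysis of the papers above: honest sources report a binary ground truth faithfully, attackers flip it with some probability, and the worst-case flip probability is the one that minimises the information a single report carries.

```python
import math

def binary_entropy(p):
    """Entropy (bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gained(attacker_flip, honest_frac=0.7):
    """Toy model: mutual information (bits) between a uniform binary
    ground truth and one received report, when honest sources always
    report truthfully and attackers flip the truth with probability
    `attacker_flip`. Illustrative only; not the papers' analysis."""
    # Probability that the report matches the truth (symmetric channel).
    p_match = honest_frac + (1 - honest_frac) * (1 - attacker_flip)
    # For a symmetric channel with a uniform prior: I = 1 - H(p_match).
    return 1.0 - binary_entropy(p_match)
```

In this toy model, as long as honest sources are the majority, even the attacker strategy that minimises `info_gained` (always lying) leaves a report strictly informative, which echoes the finding above that malicious feedback is usually informative.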

Guaranteed Decision Accuracy under Attacks

This part of the research aims to answer the question: is it possible to guarantee decision accuracy while allowing attackers to exist? Robustness is a property of decision making, defined as a high probability (the required threshold can vary across applications) of making correct decisions over the entire attack space. The ability to make robust decisions is crucial for safety-critical applications such as autonomous driving, and for security in general.

We proved, for a fundamental reporting scenario, that robustness can be achieved through trust evaluation. The proposed decision scheme also satisfies the monotonicity, optimality, and stability properties. This work was published in CSF 2020.
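As a back-of-the-envelope illustration of what such a guarantee looks like (a hypothetical majority-voting sketch, not the decision scheme of the CSF 2020 paper): if at most a known fraction of sources are attackers and each honest source is independently correct with some probability, a simple majority vote admits a computable lower bound on decision accuracy over the entire attack space.

```python
from math import comb

def correct_decision_prob(n, alpha, q):
    """Lower bound on the probability that a majority vote over n binary
    reports is correct, when at most a fraction `alpha` of sources are
    attackers (assumed to always report wrongly, the worst case for
    majority voting) and each honest source is independently correct
    with probability q. Back-of-the-envelope sketch only; not the
    decision scheme of the CSF 2020 paper."""
    attackers = int(alpha * n)
    honest = n - attackers
    # The vote is correct if correct reports outnumber wrong ones:
    # k > n - k  <=>  k >= n // 2 + 1, with k correct reports among the honest.
    needed = n // 2 + 1
    return sum(comb(honest, k) * q**k * (1 - q)**(honest - k)
               for k in range(needed, honest + 1))
```

For example, with 11 sources, at most 20% attackers, and honest accuracy 0.8, the bound evaluates to about 0.91: the decision is correct with at least that probability no matter what the attackers do in this toy model.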

The Impact of Subjectivity on Decision Robustness

Not only can information provided by malicious entities be unreliable, but so can information provided by honest entities. If the communicated information contains some form of personal judgement or opinion, then honest entities are likely to report differently, due to differences in subjectivity or discrimination ability. In the literature, subjectivity and malicious reporting are typically treated as orthogonal concerns. It is interesting to study their interplay and how, together, they influence the robustness of decision making. Our preliminary work published in TIFS 2019 shows that subjectivity decreases robustness.