Gerrit Bauch: Correlation Uncertainty: a decision theoretic approach (joint with Lorenz Hartmann)
Research frequently tackles the understanding of complex situations by investigating their constituent aspects. In climate change, researchers focus on estimating parameters like climate sensitivity, the expected sea level rise, and albedo. But what uncertainty do we face when combining this statistical evidence into a model of climate change if there is reason to suspect that (some of) these aspects are correlated? This article provides a decision-theoretic foundation for aggregating uncertainties from subspaces. Our contribution is twofold: we first give a full mathematical characterization of the set of possible correlations between subspaces, thereby providing a quantitative and qualitative description of uncertainty about the subspaces’ correlation that allows for (partial) independence of subspaces. Second, we derive an axiomatic characterization of preferences narrowing down the set of correlations a decision maker considers, making behavior with respect to correlation testable.
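A familiar benchmark (stated here only for orientation, not as the paper's result): if two aspects are summarized by real-valued random variables X and Y with known marginal distributions but unknown joint distribution, the attainable correlations form a closed interval containing the value under independence,
\[
\rho(X,Y) \in [\rho_{\min}, \rho_{\max}], \qquad \rho_{\min} \le 0 \le \rho_{\max},
\]
with the endpoints attained by the countermonotone and comonotone couplings of the given marginals (the Fréchet-Hoeffding bounds). The characterization in the paper concerns the analogous object for general subspaces and, in addition, accommodates (partial) independence.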
Paolo Crosetto: Deep Uncertainty, Ambiguity, and Risk: how Ignorance of Lottery Elements Shapes Decisions (joint with Antonio Filippin)
Risk attitudes elicited in the lab show severely limited external validity -- a finding known as the "risk elicitation puzzle".
One potential explanation has to do with risk perception: subjects might hold a representation of the construct of "risk" that differs from economists'. Risk elicitation tasks that force them to choose in an information-rich, surprise-free, experimenter-defined context could induce a mismatch with decisions outside the lab in ambiguous or deeply uncertain settings. Subjects might find that the lottery choices we feed them are not "risky" in the sense that their daily decisions are.
Another recent explanation casts doubt on the very concept of risk preferences, as behavior commonly labelled as risk aversion can be rationalized by imprecise cognitive processing of numerosities, probabilities, or both, aggravated by ignorance. The less subjects know about a situation, the more they might find it "risky" and choose safer options -- without this having anything to do with "risk attitudes" as postulated in traditional models.
We contribute to a solution of the risk elicitation puzzle by taking both of these perspectives -- perception and information -- into account. We recruit subjects for whom we have rich risk-taking data -- repeated measurements of several risk elicitation tasks, questionnaires, and a "personal risk journal" filled in over 14 days. We repeatedly ask them to choose between a safe and a risky option, about which they initially know nothing. They are then gradually provided with information about the risky option. At each of five distinct steps of information acquisition -- sampling; being told the set of outcomes, thus moving into ambiguity territory; sampling again; then being told the full set of probabilities, hence moving to decisions under risk -- subjects are asked to declare the perceived riskiness of the risky option and to choose between the two. We collect data for risky options encompassing losses, skewness, a varying number of outcomes, and differences in variance.
This design allows us to map external validity (correlation with the lab and field measures previously collected) to the degree of ignorance of lottery elements (deep uncertainty, ambiguity, risk) and to the perceived riskiness thereof. It allows us to identify which theoretical construct (i.e. which stage of the information process) provides a better representation of the subjects' decision process when compared with data on real-life risk taking. It also allows us to see which element of a lottery (variance, skewness, number of outcomes, losses) yields higher perceived risk, and how ignorance translates into safer choices.
Adam Dominiak: Is behavior within and between awareness levels related? (joint with Marie-Louise Vierø and Peter Duersch)
Jürgen Eichberger: Data-based decision making under uncertainty (joint with Ani Guerdjikova)
Decision theory in the spirit of Savage (1954) views actions as mappings from a set of states to a set of outcomes. States of the world are supposed to determine the outcomes of actions. The paradigmatic example of a state space is an opaque urn containing balls of different colors; bets on the color of a ball drawn from this urn are the classic example of an action. Real-world applications of this state-contingent decision model lack such a precise description of states and their relationship to consequences. Empirical applications instead rely on data from cases, i.e. records of actual decisions.
In this paper, we will study a model in which the primitive concepts -- states, outcomes and actions -- are not exogenously given but derived from data sets of cases. A data set of cases consists of records either from past observations or from measurement actions that are deliberately chosen in order to generate more and better data. We will (i) derive an α-max-min representation from preferences over actions with data-based partial information about features of states and (ii) show by examples how deliberate learning by measurement actions affects data and, consequently, decisions. Ambiguity, unawareness and awareness arise naturally in this context.
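For orientation, the static α-max-min form referred to in (i) ranks an act f, on our reading, by
\[
V(f) \;=\; \alpha \min_{p \in C} \int u(f)\, dp \;+\; (1-\alpha) \max_{p \in C} \int u(f)\, dp,
\]
where C is the set of probability distributions over states compatible with the data-based partial information and α captures the attitude toward the resulting ambiguity; how C, α and u are pinned down by the data set of cases is the content of the representation result.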
Itzhak Gilboa: Imagination and Planning (joint with Gabrielle Gayer)
We consider a model of case-based planning, where a position is a vector of numbers, and a case is an edge in the directed graph of positions. The planner generates new plans by using cases that are similar to those she has observed in the past. In the benchmark model presented here, similarity is defined by equality of differences (between the target and the source position). We prove a complexity result that shows why planning requires imagination and is not easily done algorithmically. We put this result in the context of learning and expertise in case-based models, distinguishing among information, insight and imagination.
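To fix ideas (our gloss of the benchmark similarity, not the paper's formal statement): a contemplated step from position c to position d counts as similar to an observed case (a, b) exactly when the displacements agree,
\[
d - c \;=\; b - a ,
\]
so a plan from an initial position x_0 to a target x_T is a chain x_0 \to x_1 \to \cdots \to x_T every step of which matches the displacement of some observed case; presumably the complexity result concerns the difficulty of finding such chains, which is why planning is not easily done algorithmically.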
Michel Grabisch: Bel Coalitional Games (joint with Silvia Lorenzini)
We define Bel coalitional games, which generalize classical coalitional games by introducing uncertainty into the framework. Unlike in Bayesian coalitional games, uncertainty is modelled through Dempster-Shafer theory and every agent can have different prior knowledge. We propose a notion of contract for our framework, which specifies how agents divide the values of the coalitions, and we use the Choquet integral to model the agents’ preferences between contracts. In a second step, we define the ex-ante core and the ex-t-interim core, where, in the latter, we need Dempster's rule of conditioning to update the mass functions of agents. In particular, in the last step of the ex-t-interim case, and when the set of states reduces to a singleton, i.e. when there is no uncertainty, we recover the classical definition of the core. Finally, we state some results about the non-emptiness of the ex-ante and the ex-t-interim core of Bel coalitional games, taking account of different types of agents’ knowledge and different kinds of games.
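For reference, the Choquet integral used to rank contracts evaluates a non-negative payoff vector x = (x_1, \dots, x_n) over states with respect to a capacity (here a belief function) \nu as
\[
\int x \, d\nu \;=\; \sum_{i=1}^{n} \bigl(x_{(i)} - x_{(i-1)}\bigr)\, \nu\bigl(\{(i), \dots, (n)\}\bigr),
\]
where x_{(1)} \le \cdots \le x_{(n)} orders the payoffs and x_{(0)} = 0; this is the standard definition, recalled only to fix notation, not a construction specific to the paper.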
Simon Grant: Recursive Expected Uncertain Utility and Neo-Additive Sources
I extend the Expected Uncertain Utility (EUU) model to a dynamic setting by considering preferences over information decision problems in which a decision-maker's choice from a menu is contingent on the realization of a signal. Information decision problems are evaluated recursively (plans of action are evaluated by backward induction). Interim preferences are model consistent (both they and the decision-maker's static preferences conform to EUU) and consequentialist (they are invariant to what choice might have been made had the signal's realization been different). Moreover, choices guided by these interim preferences will be dynamically consistent (the decision-maker never has a strict incentive not to follow through with any plan of action that is ex ante optimal).
Joe Halpern: A Causal Analysis of Harm (joint with Sander Beckers and Hana Chockler)
It has proved notoriously difficult to define harm. Indeed, it has been claimed that the notion of harm is a "Frankensteinian jumble" that should be replaced by other well-behaved notions. On the other hand, harm has become increasingly important as concerns about the potential harms that may be caused by AI systems grow. For example, the European Union's draft AI Act mentions "harm" over 25 times and points out that, given its crucial role, it must be defined carefully. I start by defining a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality. The key features of the definition are that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. I show that our definition is able to handle the problematic examples from the literature. I extend the definition to a quantitative notion of harm, first in the case of a single individual, and then for groups of individuals. I show that the "obvious" way of doing this (just taking the expected harm for an individual and then summing the expected harm over all individuals) can lead to counterintuitive or inappropriate answers, and discuss alternatives, drawing on work from the decision-theory literature.
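In symbols, the "obvious" aggregation mentioned above reads
\[
H \;=\; \sum_{i} \mathbb{E}[\,h_i\,],
\]
where h_i is the quantitative harm to individual i in a given outcome, measured against that individual's default utility, and the expectation is taken over outcomes; the point is that this additive, expectation-based formula can produce counterintuitive or inappropriate answers, which motivates the decision-theoretic alternatives discussed.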
Christopher Kops: (Mis)Interpreting the World (joint with Elias Tsakas)
This paper studies distortions due to misinterpretations of the world. Such misinterpretations occur because the agent may associate some false meaning to certain (syntactic) sentences. As a result, the corresponding events in the canonical state space representation are transformed via an interpretation operator. The main characterization results show that an analyst can identify logical mistakes that the agent is carrying out if and only if the agent himself can realize that his own interpretation of the world involves logical contradictions. Turning to the standard model of qualitative belief, we show that beliefs can at most reveal whether the agent is consistent or not, but nothing beyond this. Finally, we show how capacities (and which ones) can naturally arise from an agent’s additive subjective beliefs through the distortions which the agent’s interpretation induces.
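One way to read the last result (our gloss, not the authors' formal statement): if \mu is the agent's additive subjective belief on the canonical state space and \iota is the interpretation operator that maps each event E to the event the agent actually associates with it, then the revealed belief
\[
\nu(E) \;=\; \mu\bigl(\iota(E)\bigr)
\]
is in general only a capacity, because \iota need not commute with unions and complements; the characterization then says which capacities can arise through such distortions.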
Stefania Minardi: Association Rules: An Axiomatic Approach (joint with Gabrielle Gayer, Fan Wang and Itzhak Gilboa)
We consider a reasoner who generates predictions using association rules, each of which can be viewed as a conditional statement that, based on observed binary variables x, makes a prediction about another binary variable, y. Rules provide support for their predictions, which is aggregated in an additive way. The weight of each rule depends on the database of observations and is aggregated over all observations in which the rule applied. We provide axioms under which a reasoner who makes predictions given databases of observations can be modeled as following this rule-based prediction procedure. Generalizations and applications are discussed.
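In symbols (our paraphrase of the additive aggregation): given a database D, the support for predicting a value of y at an observation with features x is
\[
s(y \mid x, D) \;=\; \sum_{r \,:\, r \text{ applies to } x} w_r(D),
\]
where the weight w_r(D) of rule r is itself accumulated over the observations in D in which r applied, and the reasoner predicts the value of y with the greater support; the axioms characterize when prediction behavior admits such weights.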
Illia Pasichnichenko: Identifying Behavioral Types (joint with Christopher Kops, Paola Manzini, and Marco Mariotti)
Choices are influenced not only by tastes but also by latent cognitive and psychological factors such as attention, attraction to reference points, and cognitive capacity. These factors vary across individuals, leading to a diversity of behavioral types that ultimately shape aggregate choice patterns. This paper presents new identification results that clarify when and how observed choices can reveal this latent behavioral heterogeneity. Our findings provide two classes of identification conditions. The first involves matchings between types and alternatives based on the choice matrices. The second introduces new conditions based on weighted matrix sums, yielding interpretable constraints on the null-spaces of these matrices. The applicability of the results is demonstrated through the identification of relevant features and incomplete preferences.
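A minimal sketch of the setting as we read it: if C^t denotes the choice matrix generated by behavioral type t (rows indexing choice problems, columns indexing alternatives) and \lambda_t the population share of that type, observed aggregate choices take the mixture form
\[
C \;=\; \sum_{t} \lambda_t\, C^{t}.
\]
Identification asks when the shares \lambda_t, and the types behind them, can be recovered from C; the matching conditions and the null-space conditions on weighted sums of these matrices give two classes of conditions under which this recovery is possible.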
Evan Piermont: Do You Know What I Mean? A Syntactic Representation for Differential Bounded Awareness (joint with Ani Guerdjikova and John Quiggin)
Most analysis in decision theory has been undertaken in a semantic (state space) approach in which agents are assumed to have complete and shared awareness of all possible states of the world. Once the assumption of complete shared awareness is relaxed, it becomes necessary to consider communication between agents who may have different representations of the world. A syntactic (language-based) approach provides powerful tools to address this problem. For a single agent, it is possible to define an isomorphism between semantic and syntactic representations. In particular, a more expressive language is associated with a larger and more refined state space. Considering agents with different languages, the question naturally arises of whether, given an appropriate translation, agents can understand each other. A positive answer to this question implies the existence of a joint state space within which the semantic representations of the two languages can be embedded. In this paper, we define translation operators between two languages which provide a "best approximation" of the meaning of propositions in the target language, subject to its expressive power. We show that, in general, the translation operators preserve some, but not all, logical operations. We derive necessary and sufficient conditions for the existence of a joint state space and a joint language in which the subjective state spaces and the individual languages may be embedded. This approach allows us to compare languages with respect to their expressiveness and thus with respect to the properties of the associated state spaces.
Marcus Pivato: Global subjective expected utility representations
A single agent may encounter many sources of uncertainty and many menus of outcomes, which can be combined into many different decision problems. There may be analogies between different uncertainty sources (or different outcome menus). Some uncertainty sources (or outcome menus) may exhibit internal symmetries. The agent may also have different levels of awareness. In some situations, the state spaces and outcome spaces have additional mathematical structure (e.g. a topology or differentiable structure), and feasible acts must respect this structure (i.e. they must be continuous or differentiable functions). In other situations, the agent might only be aware of a set of abstract “acts”, and be unable to specify explicit state spaces and outcome spaces. We introduce a modelling framework that addresses all of these issues. We then define and axiomatically characterize a subjective expected utility representation that is “global” in two senses. First: it posits probabilistic beliefs for all uncertainty sources and utility functions over all outcome menus, which simultaneously rationalize the agent’s preferences across all possible decision problems, and which are consistent with the aforementioned analogies, symmetries, and awareness levels. Second: it applies in many mathematical environments (i.e. categories), making it unnecessary to develop a separate theory for each one.
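Concretely, for a decision problem that pairs an uncertainty source with state space S and an outcome menu X, the representation ranks acts f : S \to X by the usual subjective expected utility formula
\[
V(f) \;=\; \int_{S} u_X\bigl(f(s)\bigr)\, \mathrm{d}p_S(s),
\]
the “global” feature being that a single family of beliefs (p_S)_S and utilities (u_X)_X is used across all decision problems the agent may face and is required to respect the analogies, symmetries, and awareness levels described above.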
John Quiggin: Unawareness and the unexcluded middle: Heyting algebras and sober spaces
This paper addresses the potentially problematic nature of negation in the context of bounded awareness. The proposed response is to reject the "postulate of the excluded middle", which states that if a proposition is not true, its negation must be true. The rejection of the excluded middle postulate means that the language may be represented by a Heyting algebra rather than the usual Boolean algebra. The Heyting algebra is not closed under negation, but gives rise to a pseudocomplement. Applying the relevant version of the Stone representation theorem, the language is isomorphic to a lattice of open subsets of the free Boolean algebra generated by the elementary propositions of the language. Agents are defined to be unaware of propositions expressible in the free Boolean extension but not in the original language.
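A standard illustration of how the excluded middle fails in this environment: in the Heyting algebra of open subsets of a topological space, the pseudocomplement of an open set U is the interior of its complement, so for U = (0, \infty) \subseteq \mathbb{R} one has
\[
\neg U = (-\infty, 0), \qquad U \vee \neg U = \mathbb{R} \setminus \{0\} \neq \mathbb{R},
\]
so a proposition and its pseudocomplement need not jointly exhaust the space. The unawareness interpretation above treats the agent's language analogously, as a lattice of open sets inside a larger Boolean structure.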
Marie-Louise Vierø: Measurements of Attitudes toward Unawareness (joint with Edi Karni)
Decisions under uncertainty may result in new, unanticipated consequences. Decision makers may be aware of being unaware of possible consequences of their decisions and take this into account when choosing among alternative courses of action. Decision makers' attitudes toward encountering unanticipated consequences are reflected in their choice behavior. This paper proposes, for the first time, measures of attitudes toward unawareness, thereby filling a lacuna in the literature on decision making under uncertainty and awareness of unawareness.