Alaoui Larbi (Universitat Pompeu Fabra)
What's in a u (co-authored with Antonio Penta).
We revisit the long-standing debate about the meaning of the utility function used in the standard Expected Utility (EU) model. Despite the common view that EU forces risk aversion and diminishing marginal utility of wealth to be pegged to one another, we show that this is not the case. Diminishing marginal utility for money is a reason for risk aversion, but it need not suffice for it, nor need it be its sole determinant. The attitude towards ‘pure risk’ is also a contributing factor, and it is independent of the former. The two can be separately identified, and both contribute to the overall attitude towards risk. We discuss several implications of this result, including: (i) questions of identification; (ii) a new perspective on the implications of Rabin’s Paradox; (iii) a novel paradox for Prospect Theory; (iv) empirical measures of risk via self-reported questionnaires and in multi-context settings.
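As background for the claim being revisited, here is the textbook equivalence in sketch form (the standard EU result, not the authors' new decomposition): under EU with utility u, risk aversion is usually identified with concavity of u via Jensen's inequality.
\[
% Textbook EU: risk aversion <=> diminishing marginal utility (concave u).
\mathbb{E}[u(X)] \le u(\mathbb{E}[X]) \ \text{for all lotteries } X \iff u \text{ is concave.}
\]
The paper's point is that a separate attitude towards ‘pure risk’ also enters, so the overall risk attitude need not be pinned down by the curvature of u alone.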
The rationalizability of survey responses (co-authored with Miguel Ballester).
We propose and study the concept of survey rationalizability, which we base on classical item response theories in psychology. Survey rationalizability involves positioning survey questions on a common scale such that, in the main case of attitudinal surveys, each respondent gives higher support to questions that are more aligned with her views. We first demonstrate that ideas from standard revealed preference analysis can be used to characterize when and how dichotomous surveys are rationalizable. We then show that these results readily extend to more general surveys. Furthermore, we investigate the identification of the models and extend the analysis in several directions.
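A minimal item-response-style sketch of the rationalizability requirement; the notation (respondent positions \theta_i, question positions q_j) is ours, for illustration only.
\[
% theta_i = respondent i's position, q_j = question j's position on a common
% scale; s_i(j) = i's support for question j (hypothetical notation).
s_i(j) \ge s_i(k) \iff |\theta_i - q_j| \le |\theta_i - q_k|.
\]
Rationalizability then asks whether positions (\theta_i) and (q_j) exist that generate the observed responses in this way.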
Bach Dong-Xuan (Bielefeld University)
Unanimity of Two Selves in Decision Making (co-authored with Pierre Bardier and Van-Quy Nguyen).
We propose a new model of incomplete preferences under uncertainty, which we call unanimous dual-self preferences. Act f is considered more desirable than act g when, and only when, both the evaluation of an optimistic self, computed as the welfare level attained in a best-case scenario, and that of a pessimistic self, computed as the welfare level attained in a worst-case scenario, rank f above g. Our comparison criterion involves multiple priors, as best and worst cases are determined among sets of probability distributions, and is, generically, less conservative than Bewley preferences and twofold multi-prior preferences, the two ambiguity models that are closest to ours.
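In symbols, a sketch of the unanimity criterion as described in the abstract (notation ours):
\[
% C^+ and C^- are the sets of priors of the optimistic and pessimistic selves
% (they may coincide); u is the utility index.
f \succsim g \iff \max_{p \in C^+} \mathbb{E}_p[u(f)] \ge \max_{p \in C^+} \mathbb{E}_p[u(g)]
\ \text{and} \ \min_{p \in C^-} \mathbb{E}_p[u(f)] \ge \min_{p \in C^-} \mathbb{E}_p[u(g)].
\]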
Borie Dino (Nantes University)
Revealed Ambiguity and Comparative Probability (co-authored with Yann Rebillé).
This paper deals with the derivation of the rational core of a complete relation over events. Such a relation naturally arises when likelihood estimations are required in environments that involve ambiguity. Without imposing structural conditions, we introduce the rational core of a relation over events and show that it can be represented by a set of priors.
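Schematically, the multi-prior representation of the rational core takes the unanimity form (notation ours):
\[
% \hat{\succsim} = rational core of the relation \succsim over events;
% C = the representing set of priors.
E \mathrel{\hat{\succsim}} F \iff p(E) \ge p(F) \ \text{for all } p \in C.
\]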
Cerreia-Vioglio Simone (Università Bocconi)
Robust Mean-Variance Approximations (co-authored with Fabio Maccheroni and Massimo Marinacci).
We study mean-variance approximations for a large class of preferences. Compared to the standard mean-variance approximation, which features only a risk-variability term, a novel index of variability appears. Neglecting it in empirical estimation may result in puzzlingly inflated risk terms in standard mean-variance approximations.
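For reference, the benchmark being generalized, with the novel index shown only as a placeholder (V and its weight \theta are schematic, not the paper's exact objects):
\[
% Standard approximation: lambda = coefficient of risk aversion.
U(X) \approx \mathbb{E}[X] - \tfrac{\lambda}{2}\,\mathrm{Var}(X),
% Robust version, schematically, with a second variability index V(X):
\qquad U(X) \approx \mathbb{E}[X] - \tfrac{\lambda}{2}\,\mathrm{Var}(X) - \theta\, V(X).
\]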
Cornet Bernard (University of Kansas, Université Paris Panthéon-Sorbonne)
Financial Markets with Hedging Complements (co-authored with Alain Chateauneuf).
This paper considers financial markets with bid-ask spreads and studies the class of markets with hedging complements, a property formalized by the complementarity of the market's hedging price, in the same way that strategic complementarity is defined on agents' payoff functions in game theory. The class of markets with hedging complements contains both markets with frictionless securities and the larger class of markets with independent marketed securities together with the frictionless bond, assuming both are arbitrage-free. Moreover, the hedging prices of the latter markets are proved to satisfy a tractable explicit formula, as the sum of a "generalized" convex Choquet integral and a modular term. Finally, markets in this class also satisfy the put-call parity of Cerreia-Vioglio, Maccheroni, Marinacci, and Montrucchio (2015, JET).
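Schematically, the explicit formula has the following shape (our notation): the hedging price \pi decomposes into a convex Choquet integral against a capacity \nu plus a modular (additive) term m,
\[
% pi(x) = hedging price of payoff x; the integral is a "generalized" convex
% Choquet integral in the paper's sense; m is modular (additive).
\pi(x) = \int x \, d\nu + m(x).
\]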
Curello Gregorio (University of Mannheim)
Voting over time and space (co-authored with John Quah and Bruno Strulovici).
A committee selects an alternative from a multi-dimensional set via a dynamic voting procedure. We exhibit a general way of ranking the agents' preferences, and a monotonicity condition on the voting procedure, which together guarantee that a central agent obtains her first-best outcome in equilibrium. Moreover, the selected alternative and the behaviour of strategic agents are unaffected if some agents vote sincerely. The framework encompasses committees facing dynamic stochastic decision problems and selecting an action in each period. We derive applications to budget allocation problems and jury deliberations, among others.
Fukuda Satoshi (Università Bocconi)
Are the Players in an Interactive Belief Model Meta-certain of the Model Itself?
Are the players "commonly certain" of an interactive belief model itself? The paper formalizes what it means to say that "a player is certain of her own belief-generating map" or that "the players are certain of their belief-generating maps (i.e., the model)." The paper shows: a player is certain of her own belief-generating map if and only if her beliefs are introspective. The players are commonly certain of the model if and only if, for any event which some player i believes at some state, it is common belief at the state that player i believes the event. The paper then asks whether this "meta-common-certainty" assumption is needed for epistemic characterizations of game-theoretic solution concepts. The paper shows: common belief in rationality leads to actions that survive iterated elimination of strictly dominated actions, as long as each player is logical and certain only of her own strategy and belief-generating map.
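For concreteness, the introspection conditions the first result refers to are the standard ones for a belief operator B_i (standard notation, not taken from the paper):
\[
% Positive introspection: believing E implies believing that one believes E.
B_i E \subseteq B_i B_i E, \qquad
% Negative introspection: not believing E implies believing that one does not.
\neg B_i E \subseteq B_i \neg B_i E.
\]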
Gilboa Itzhak (HEC Paris)
Association Rules: An Axiomatic Approach (co-authored with Gabrielle Gayer, Stefania Minardi, and Fan Wang).
We consider a reasoner who generates predictions using association rules, each of which can be viewed as a conditional statement that conditions on observed binary variables x and makes a prediction about another binary variable, y. Rules provide support to their predictions, and this support is aggregated additively. The weight of each rule depends on the database of observations and is accumulated over all observations in which the rule applies. We provide axioms under which a reasoner who makes predictions given databases of observations can be modeled as following this rule-based prediction procedure. Generalizations and applications are discussed.
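In symbols, the additive aggregation reads (notation ours):
\[
% R(x, y) = rules whose conditions match the observed variables x and which
% predict y; w_r(D) = weight of rule r given database D, accumulated over the
% observations in D where r applies.
S(y \mid x, D) = \sum_{r \in R(x, y)} w_r(D).
\]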
Guerdjikova Ani (University of Grenoble Alpes)
Do You Know What I mean? A Syntactic Representation for Differential Bounded Awareness (co-authored with Evan Piermont and John Quiggin).
This paper provides methodological foundations for the study of learning and interactions in the presence of bounded awareness by exploiting the interplay between semantic (state space) and syntactic (propositional) representations of awareness. Differential awareness, both in terms of coarsening and restriction of an underlying objective state space, gives rise to distinct languages. Information signals, considered in semantic terms as observations on a partition of the state space, can be expressed in terms of propositions in the language of the sender and have to be translated into the language of the receiver. We define a translation operator between two languages which preserves the meaning of propositions in a restricted sense (subject to the expressive power of the languages). Since languages represent different states of awareness, the translation operator in general fails to preserve logical operations. We study the properties of, and provide a characterization for, the translation operator defined with respect to an objective common state space. This approach allows us to compare languages with respect to their expressiveness and thus with respect to their degree of awareness of the underlying objective state space.
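One natural way to formalize such a translation operator, offered here as an illustration rather than the authors' definition: map each proposition \varphi of the sender's language to the strongest proposition of the receiver's language implied by it (assuming the receiver's language is closed under conjunction),
\[
% [[.]] = the event expressed in the objective state space; L_r = receiver's language.
[[\tau(\varphi)]] = \bigcap \{ [[\psi]] : \psi \in \mathcal{L}_r, \ [[\varphi]] \subseteq [[\psi]] \},
\]
which preserves meaning only up to the receiver's expressive power and, as noted above, need not commute with logical operations such as negation.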
Li Chen (Erasmus University Rotterdam)
The cost of going against the tide: the effect of stereotypes on ambiguity attitudes (co-authored with Cedric Gutierrez and Marine Hainguerlot).
This paper investigates whether people's ambiguity attitudes and beliefs about others' competence depend on the stereotypicality of others' profiles. We found that people are more ambiguity averse about men's performance in a task they are expected to perform well when those men have a counter-stereotypical background. People also hold more pessimistic beliefs about them, and the effect of these pessimistic beliefs is amplified because judges are less a-insensitive (ambiguity-insensitive) towards them. In this sense, counter-stereotypical men were punished both in terms of ambiguity attitudes and in terms of beliefs about their competence. We did not observe such effects for counter-stereotypical women. Learning new information diminished the penalty for men in terms of ambiguity attitudes.
Mandler Michael (Royal Holloway College, University of London)
Decision-making for extreme outcomes.
I consider decision-making in the face of two types of extreme outcomes: impoverishment catastrophes that can lead to very low consumption, first analyzed by Weitzman, and extermination events, such as large meteor strikes, that cut short an indefinitely long flow of utility. Requiring that orderings of policy options cannot be overturned by a small change in the probability distributions of outcomes blocks any ranking of options. But the maxmin rule for multiple priors, combined with an overtaking criterion, can issue robust rankings. In the impoverishment setting, decision-makers should minimize the likelihood of the tail event of very low consumption, while in the extermination setting they should ignore the tail event where civilization survives until the very distant future. The former conclusion provides a rationale for Weitzman's Dismal Theorem, while the latter validates conventional policy comparisons based on discounting.
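Schematically, the two ingredients of the robust rule, each in its standard form (assembled here for illustration, not taken verbatim from the paper):
\[
% Maxmin over a set C of priors (Gilboa-Schmeidler form):
V(f) = \min_{p \in C} \mathbb{E}_p[U(f)];
% Overtaking criterion for comparing utility streams (u_t) and (v_t):
(u_t) \succ (v_t) \iff \liminf_{T \to \infty} \sum_{t=0}^{T} (u_t - v_t) > 0.
\]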
Monet Benjamin (University of Paris Panthéon-Assas)
Ambiguity, randomization and the timing of resolution of uncertainty (co-authored with Vassili Vergopoulos).
The classic framework of Anscombe and Aumann (Ann Math Stat 34:199–205, 1963) for decision-making under uncertainty postulates both a primary source of uncertainty (the “horse race”) and an auxiliary randomization device (the “roulette wheel”). It also imposes a specific timing of resolution of uncertainty, as the horse race takes place before the roulette is played. While this timing is without loss of generality for Subjective Expected Utility, it rules out plausible choice patterns of ambiguity aversion. In this paper, we reverse this timing by assuming that the roulette is played prior to the horse race, and we obtain an axiomatic characterization of Choquet Expected Utility that is dual to that of Schmeidler (Econometrica 57(3):571–587, 1989). In this representation, ambiguity aversion is characterized by an aversion to conditioning roulette acts on horse events, which, as we argue, is more plausible. Moreover, ambiguity aversion can be greater here than in Schmeidler's model. Finally, our reversed timing yields incentive compatibility of the random incentive mechanisms frequently used in experiments for eliciting ambiguity attitudes.
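For reference, the CEU functional being axiomatized (Schmeidler's standard form): acts are evaluated by a Choquet integral of utility against a capacity \nu,
\[
% nu = capacity (non-additive probability); for u(f) >= 0 the Choquet integral is
V(f) = \int u(f)\, d\nu = \int_0^{\infty} \nu\big(\{ s : u(f(s)) \ge t \}\big)\, dt.
\]
In Schmeidler's timing, ambiguity aversion corresponds to convexity of \nu; in the reversed timing it is instead characterized by aversion to conditioning roulette acts on horse events.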
Mononen Lasse (University of Bielefeld)
Dynamically Consistent Intergenerational Welfare.
Dynamic consistency is crucial for credible evaluation of intergenerational choice plans that inherently lack commitment. We offer a general characterization of dynamically consistent intergenerational welfare aggregation. The aggregation is characterized by an envy-guilt asymmetry in discounting with respect to future generations' utility: utility above future generations' level is discounted differently from utility below it. This yields a simple and tractable characterization of dynamically consistent choice rules.
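Purely as an illustration of envy-guilt asymmetry (this functional form is ours, not taken from the paper), one can picture a recursion in which the current generation's utility is weighted differently above and below the welfare of future generations:
\[
% W_t = welfare at t; u_t = generation t's utility; beta^+ != beta^- encodes
% the asymmetry (illustrative parameters only).
W_t = W_{t+1} + \beta^{+} (u_t - W_{t+1})^{+} - \beta^{-} (W_{t+1} - u_t)^{+}.
\]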
Mukerji Sujoy (Queen Mary University of London)
Persuasion with Ambiguous Communication (co-authored with Xiaoyu Cheng, Peter Klibanoff, and Ludovic Renou).
This paper explores whether and to what extent ambiguous communication can be beneficial to the sender in a persuasion problem when the receiver (and possibly the sender) is ambiguity averse. We provide a concavification-like characterization of the sender's optimal ambiguous communication. The characterization highlights the necessity of using a collection of experiments that form a splitting of an obedient (i.e., incentive compatible) experiment. Some experiments in the collection must be Pareto-ranked, in the sense that both players agree on their payoff ranking. The existence of such a binary Pareto-ranked splitting is necessary for ambiguous communication to benefit the sender and, if an optimal Bayesian persuasion experiment can be split in this way, sufficient for an ambiguity-neutral sender, as well as the receiver, to benefit. Such gains are impossible when the receiver has only two actions. The possibility of gains is substantially robust to (non-extreme) sender ambiguity aversion.
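As background, the benchmark that the concavification-like characterization extends: in standard Bayesian persuasion (Kamenica and Gentzkow, 2011), the sender's optimal value at prior \mu_0 is the concave envelope of her indirect value function,
\[
% v(mu) = sender's payoff when the receiver best-responds to posterior mu;
% cav v = the smallest concave function lying above v.
V^{BP}(\mu_0) = (\operatorname{cav} v)(\mu_0).
\]
The characterization here adds splittings of an obedient experiment into Pareto-ranked components to capture the extra value of ambiguous communication.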
Payro Chew Fernando (Universitat Autònoma de Barcelona)
Modeling the modeler: A normative theory of experimental design (co-authored with Evan Piermont).
We consider an analyst whose goal is to identify a subject's utility function through revealed preference analysis. We argue that the analyst's preferences over which experiments to run should adhere to three normative principles. The first, Structural Invariance, requires that the value of a choice experiment depend only on what the experiment may potentially reveal. The second, Identification Separability, demands that the value of identification be independent of what would have been counterfactually identified had the subject had a different utility. Finally, Information Monotonicity asks that more informative experiments be preferred. We provide a representation theorem showing that these three principles characterize Expected Identification Value maximization, a functional form that unifies several theories of experimental design. We also study several special cases and discuss potential applications.
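Schematically, and assuming a prior over the subject's candidate utilities (our notation, not the paper's), Expected Identification Value maximization evaluates an experiment e by
\[
% mu = prior over candidate utilities u; I_e(u) = what e would identify if the
% subject's utility were u; v = value of the identified information.
W(e) = \sum_{u} \mu(u)\, v\big(I_e(u)\big),
\]
where the additive form across utilities reflects Identification Separability.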
Pennesi Daniele (University of Torino)
Event valence and subjective probability (co-authored with Adam Brandenburger, Paolo Ghirardato, and Lorenzo Stanca).
In the world of subjective probability, there is no a priori reason why probabilities, interpreted as a willingness to bet, should necessarily lie in the interval [0, 1]. We weaken the Monotonicity axiom in classical subjective expected utility (Anscombe and Aumann, 1963) to obtain a representation of preferences in terms of an affine utility function and a signed (subjective) probability measure on states. We decompose this probability measure into a non-negative probability measure (“probability”) and an additive set function on states which sums to zero (“valence”). States with positive (resp. negative) valence are attractive (resp. aversive) for the decision maker. We show how our theory can resolve several paradoxes in decision theory, including “hedging aversion” (Morewedge et al., 2018), the conjunction effect (Tversky and Kahneman, 1982, 1983), the co-existence of insurance and betting (Friedman and Savage, 1948), and the choice of dominated strategies in strategy-proof mechanisms (Hassidim et al., 2016). We extend our theory to allow for a stake-dependent (and non-additive) willingness to bet, which also relaxes our earlier constraints on how valence can behave.
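In symbols, the representation and its decomposition (notation ours, following the description above):
\[
% mu = signed willingness-to-bet measure; p = non-negative probability;
% v = valence, an additive set function summing to zero across states.
V(f) = \sum_{s} \mu(s)\, u(f(s)), \qquad \mu = p + v, \qquad \sum_{s} v(s) = 0,
\]
so \mu(s) may fall outside [0, 1] at states with sufficiently strong valence.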
Piermont Evan (Royal Holloway College, University of London)
How to Incentivize Experts to Reveal Novel Actions.
I examine how to incentivize an expert to reveal novel actions, expanding the set from which a decision maker can choose. The chosen action determines the payoffs to both players. I show that the outcomes achievable by any (incentive compatible) mechanism are characterized by iterated revelation mechanisms (IRMs): simple dynamic mechanisms in which, each round, the expert chooses whether to reveal a novel action and the decision maker proposes a contract that the expert can accept or reject; the IRM ends after a rejection or when nothing novel is revealed. Robust IRMs, those that maximize the worst-case outcome, delineate the maximal payoff achievable by any efficient mechanism.
Quah John (National University of Singapore)
Money Pumps and Bounded Rationality (co-authored with Joshua Lanier and Matthew Polisson).
We study the senses in which the amount of money that can be pumped from a consumer is related to the ‘size’ of the consumer’s departure from utility-maximization.
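For intuition, a standard money-pump construction (not necessarily the paper's exact measure): if the consumer's choices form a revealed-preference cycle in which bundle x^{k+1} was affordable at prices p^k when x^k was chosen (indices mod n), an arbitrageur trading along the cycle extracts
\[
% Each term is non-negative by affordability; the sum is strictly positive for
% a strict violation and measures the money pumped.
\sum_{k=1}^{n} p^{k} \cdot \big( x^{k} - x^{k+1} \big) > 0.
\]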
Wakker Peter (Erasmus University Rotterdam)
Source Theory: A Tractable and Positive Ambiguity Theory (co-authored with Aurélien Baillon, Han Bleichrodt, and Chen Li).
This lecture introduces source theory. In one sentence, source theory shows how probability weighting functions can be used to analyze Ellsberg’s ambiguity aversion (unknown probabilities). Further, this can be done tractably in Savage’s framework of state-contingent assets. Commonly, Anscombe-Aumann’s framework (AA), rather than Savage’s, is used to study ambiguity. AA’s drawbacks are that it uses empirically complex two-stage gambles, makes multistage optimization assumptions that are controversial under ambiguity, and needs expected utility for risk, which is descriptively problematic. These drawbacks are avoided in Savage’s framework. Researchers invariably used AA because it gives convenient mathematical linearity, whereas Savage’s framework was considered too intractable for ambiguity. We use Savage’s framework and show the opposite: without linear algebra we still get intuitive preference axioms, tractable maths and calculations, convenient Arrow-Pratt transformations for weighting functions, and visual graphical representations of ambiguity attitudes. Source theory is empirically realistic and prescriptively implementable.
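Schematically, source theory evaluates a binary ambiguous act via a source-dependent weighting function (the standard source-method form; notation adapted): for an event E from source S and outcomes x > y,
\[
% w_S = probability weighting function for source S; P = subjective probability.
V(x_E y) = w_S\big(P(E)\big)\, u(x) + \Big(1 - w_S\big(P(E)\big)\Big)\, u(y),
\]
so ambiguity attitudes are captured by how the shape of w_S varies across sources of uncertainty.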