Publications:
An Equivalence Between Rational Inattention Problems and Complete-Information Conformity Games (with Ole Jann), 2022, Economics Letters
We consider two types of models: (i) a rational inattention problem (as known from the literature) and (ii) a conformity game, in which fully informed players find it costly to deviate from average behavior. We show that these problems are equivalent from the perspective of both the participant and an outside observer: each individual faces identical trade-offs in both situations, and an observer would not be able to distinguish the two models from the choice data they generate. We also establish when individual behavior in the conformity game maximizes welfare.
Working Papers:
Economic Theory
Optimally Biased Expertise (with Andrei Matveenko, Maxim Senkov, and Egor Starkov)
We show that in delegation problems, a principal benefits from belief misalignment vis-à-vis an agent when the latter can flexibly acquire costly information. The agent optimally succumbs to confirmatory learning, leading him to favor the ex ante optimal action. We show that the principal prefers to mitigate this by hiring an agent who is ex ante more uncertain about which action is optimal. This is optimal even when the principal is herself biased towards some action: the benefit always outweighs the cost of a small misalignment. An optimally misaligned agent considers weakly more actions than an aligned agent. All results continue to hold when delegation is replaced by communication.
Sequential Search with Flexible Information (with Andrei Matveenko, Salil Sharma, Elias Tsakas, and Mark Voorneveld)
We consider a model of sequential search in which an agent (the employer) has to choose one alternative (a candidate) from a finite set. A key feature of our model is that the employer is free to endogenously choose any interview for each arriving candidate. We characterize the employer's optimal strategy by introducing a difficulty order over interviews and show that the unique optimal policy is monotone with respect to this order. We then study the implications of this structure for hiring outcomes, in particular whether candidates are treated equally in terms of their probability of being hired. For a large number of candidates, the model delivers a unique prediction: the first candidate is favored. Additionally, we show that when candidates differ ex ante, it may be optimal for the employer to start the search with the worse candidate.
Applied OR
Keep it Simple: Addressing Rare Events in Data Synthesis Using Beta Divergence (with Michel Bierlaire)
The iterative proportional fitting (IPF) algorithm remains one of the most widely used tools for generating synthetic data. In this paper, we address the “zero problem” inherent to IPF. Recognizing that IPF solves a well-defined convex problem with affine constraints, we modify the objective by introducing the Beta divergence, which recovers the original problem as a special case. We find that this approach mitigates the zero problem in two ways: it reduces the sensitivity of the solution to imputed values and yields solutions that more closely approximate the true distribution when imputed values are substantially lower than those in the data-generating process. At the same time, the algorithm preserves the simplicity of IPF and remains easy to implement.
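For readers unfamiliar with IPF, the following is a minimal sketch of the classic two-dimensional algorithm that the paper builds on (not the Beta-divergence variant proposed there): rows and columns of a seed table are alternately rescaled to match target marginals. Note how zero seed cells remain zero under rescaling, which is the “zero problem” the abstract refers to. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-9, max_iter=1000):
    """Classic iterative proportional fitting on a 2-D seed table.

    Alternately rescales rows and columns so the fitted table matches
    the given marginal totals. Cells that are zero in the seed stay
    zero forever -- multiplicative updates cannot revive them.
    """
    x = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale each row to match its target total (skip empty rows).
        row_sums = x.sum(axis=1, keepdims=True)
        x *= np.divide(row_targets[:, None], row_sums,
                       out=np.ones_like(row_sums), where=row_sums > 0)
        # Scale each column to match its target total.
        col_sums = x.sum(axis=0, keepdims=True)
        x *= np.divide(col_targets[None, :], col_sums,
                       out=np.ones_like(col_sums), where=col_sums > 0)
        # Converged when row totals also hold after the column step.
        if np.allclose(x.sum(axis=1), row_targets, atol=tol):
            break
    return x

seed = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
fitted = ipf(seed,
             row_targets=np.array([30.0, 70.0]),
             col_targets=np.array([40.0, 60.0]))
```

After convergence, `fitted` matches both marginal vectors while preserving the interaction structure of the seed; the paper's modification changes the objective behind these updates rather than this overall fixed-point scheme.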
Simulation Framework for Generating Synthetic Panel Data (with Marija Kukic and Michel Bierlaire)
Most existing synthetic data generation methods produce cross-sectional datasets that replicate only aggregated population characteristics, limiting their ability to capture individual-level dynamics over time. This paper introduces a simulation framework for generating synthetic panel data that consistently tracks the same individuals across years. The contribution of this work is threefold: (i) it defines a universal set of time-independent variables representing life trajectories through parametric models informed by the demographic literature; (ii) it establishes mapping rules to translate these universal variables into time-dependent attributes for any observation year; and (iii) it updates model parameters via maximum likelihood estimation using one or more cross-sectional datasets, assessing their impact on time-dependent outcomes. Using data from the Swiss Mobility and Transport Microcensus, we compare data-free and data-integrated implementations of the framework. Results show that the approach produces consistent individual trajectories and that data integration enhances the alignment of synthetic samples with observed aggregates. The proposed framework provides a flexible basis for constructing realistic longitudinal datasets that evolve with new data sources, enabling temporally consistent population modeling and supporting long-term behavioral and policy analyses in the absence of real panel data.