Abstract: Traditional models of political competition often treat ideological preferences and campaign resource allocation as separate phenomena. This paper bridges the two by introducing a theoretical framework that integrates the Hotelling-Downs spatial voting model with electoral Blotto games. We analyse a two-party system where the electorate is partitioned into distinct social groups, each characterised by a specific population share and an internal distribution of voter ideologies. In our model, a voter's utility is determined jointly by her ideological distance from the winning party and by the utility gained from the campaign resources targeted at her group. Assuming that parties maximise their vote share net of campaign spending costs, we characterise their optimal campaign strategies. Our interior equilibrium results imply that the groups' marginal utility gains from spending are inversely proportional to their densities at the ideological cutoffs. As a result, parties systematically direct more resources toward groups with denser `battlegrounds'. Furthermore, the model establishes a positive correlation between competing parties' spending strategies, as the groups' marginal utility gains from spending are proportional across parties. Finally, we introduce a monopolist social media platform as the key arena of political competition and examine how it affects the outcome. This framework provides a novel mechanism for understanding the interplay between group-based campaign strategies and voter polarisation.
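A worked version of the interior condition may help fix ideas. The notation below (group shares $n_g$, cutoff densities $f_g$, per-group spending $s_g$, utility gains $u_g$, cost $C$) is our own shorthand for illustration, not necessarily the paper's. A party's first-order condition for spending on group $g$ reads

\[
n_g\, f_g(t_g)\, u_g'(s_g^*) \;=\; \frac{\partial C}{\partial s_g},
\qquad\text{so that}\qquad
u_g'(s_g^*) \;\propto\; \frac{1}{n_g\, f_g(t_g)} ,
\]

where $t_g$ is group $g$'s ideological cutoff. Under diminishing returns ($u_g'' < 0$), a higher density $f_g$ at the cutoff forces a lower marginal gain at the optimum, i.e. more spending on that group: the `denser battlegrounds' effect described above.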
Abstract: We study a novel course allocation problem with contracts. Courses have lexicographic preferences: they favour students from higher-priority groups and, within these groups, students taking the course under higher-priority contract terms. Students have preferences over sets of course-term pairs, but can only signal a ranking over singletons and a capacity for each contract term, and may not list the same course under multiple terms. We compare the performance of five mechanisms: the HBS draft, its slight modification (SZISZ), the random serial dictatorship (RSD), and deferred acceptance with single (DASTB) and multiple (DAMTB) tie-breaking. We check whether they satisfy certain desiderata (strategy-proofness, efficiency, pairwise stability), and prove that stability is incompatible both with strategy-proofness and with possible student-efficiency. Using preference data, we then simulate each mechanism repeatedly and compute a set of welfare indicators. While the RSD and DASTB satisfy more of the theoretical desiderata, the SZISZ and HBS draft perform better on the data-based welfare indicators.
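To make the message space concrete, here is a minimal, purely illustrative Python sketch of the random serial dictatorship under the reporting constraints described above (a ranking over course-term singletons, a per-term capacity, no course listed under two terms). All data structures and names are our assumptions, not the paper's code.

```python
import random

def random_serial_dictatorship(students, course_seats, rng=random.Random(0)):
    """Students are drawn in a random order and greedily take their
    highest-ranked still-feasible (course, term) pairs, respecting seat
    counts and their own per-term capacities."""
    order = list(students)
    rng.shuffle(order)
    seats = dict(course_seats)                     # (course, term) -> seats left
    assignment = {s: [] for s in students}
    for s in order:
        capacity = dict(students[s]["capacity"])   # term -> courses still allowed
        taken_courses = set()
        for course, term in students[s]["ranking"]:
            if seats.get((course, term), 0) > 0 \
                    and capacity.get(term, 0) > 0 \
                    and course not in taken_courses:   # same course only once
                seats[(course, term)] -= 1
                capacity[term] -= 1
                taken_courses.add(course)
                assignment[s].append((course, term))
    return assignment

students = {
    "ann": {"ranking": [("micro", "exam"), ("game", "exam")],
            "capacity": {"exam": 1}},
    "bob": {"ranking": [("micro", "exam"), ("game", "grade")],
            "capacity": {"exam": 1, "grade": 1}},
}
print(random_serial_dictatorship(students, {("micro", "exam"): 1,
                                            ("game", "exam"): 1,
                                            ("game", "grade"): 1}))
```

The HBS draft, by contrast, proceeds in picking rounds (one course per student per round, with the order typically reversed between rounds) rather than letting each student exhaust her list at once.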
Abstract: We propose a general framework for strategic interaction that relaxes the expected utility assumption and instead works directly with players’ conditional preference relations (Gilboa & Schmeidler, 2003; Perea, 2025). By leveraging their epistemic foundations, we show that three classical solution concepts admit natural generalisations to this broader setting without altering their underlying reasoning principles. First, we introduce the iterated elimination of never-optimal choices as an analogue of the iterated elimination of strictly dominated choices, characterised by common belief in rationality. We also prove that it always yields a non-empty solution. Second, we generalise Nash equilibrium, preserving its characterisation by common belief in rationality and simple beliefs. Third, we extend correlated equilibrium, characterised by common belief in rationality and a common prior. While the latter two solution concepts are not guaranteed to exist in all games, we identify a sufficient condition for their existence, which we call strong continuity. This property requires the set of beliefs where a choice is weakly preferred to another to be closed, for any player and any choice pair. We also show that this condition is equivalent to imposing a continuous utility representation on the game, but weaker than imposing an expected utility representation. Our results demonstrate that the foundations of game theory are robust to the relaxation of expected utility, opening the door to richer and more flexible models of strategic interaction.
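In symbols, with notation that is our guess rather than the paper's ($B_i$ for player $i$'s belief space, $\succsim^i_b$ for her conditional preference given belief $b$, and $C_i$ for her choice set), strong continuity requires

\[
\{\, b \in B_i \;:\; c \succsim^i_b c' \,\} \ \text{is closed in } B_i
\qquad \text{for every player } i \text{ and all } c, c' \in C_i .
\]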
Abstract: We study the informative advertising strategies of two entrants competing with a fully known incumbent in a homogeneous-goods Bertrand market. Entrants must advertise in order to be considered by consumers. We compare a benchmark case without targeted advertising to a setting where entrants can target specific consumer groups. Without targeting, each entrant independently informs half of the consumers, leading to symmetric price competition. With targeting, equilibrium advertising becomes asymmetric and nested: the smaller entrant only informs consumers who are also aware of the larger entrant, while the larger entrant reaches some consumers who are unaware of the smaller one. This nested pattern arises because entrants avoid intense price competition by leaving some consumers captive to the incumbent; the smaller entrant even limits its presence among the larger entrant's consumers. The structure segments the market into overlapping duopolies, softening price competition. Targeting also redistributes surplus towards the larger entrant. These results help rationalise persistent price dispersion after entry and show how targeting technologies reshape competitive structure.
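In set notation (ours, for illustration), let $A_L, A_S \subseteq [0,1]$ be the consumer sets informed by the larger and the smaller entrant, while every consumer knows the incumbent. Without targeting, $|A_L| = |A_S| = \tfrac12$ with independently drawn membership; with targeting, the nested equilibrium satisfies

\[
A_S \,\subset\, A_L \,\subsetneq\, [0,1],
\]

so consumers outside $A_L$ remain captive to the incumbent, those in $A_L \setminus A_S$ choose between the incumbent and the larger entrant, and only those in $A_S$ are aware of all three firms.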
Abstract: In pharmaceutical markets, the entry of generic firms is often followed by an increase in brand-name prices. This paper provides a theoretical explanation for this so-called `generic competition paradox' and measures its welfare effects. We present a model of a horizontally differentiated market with a high and a low segment. We consider two different settings: a monopoly game where the product is sold by a single incumbent, and a competitive game where an entrant also enters the market. For low-segment consumers, the two firms differ only in their prices and locations, but high-segment consumers are brand-loyal (i.e. they are willing to purchase only the incumbent's product). We find that, under certain parametrisations, a monopolist incumbent is active in both segments, but entry makes it focus only on its brand-loyal consumers, leaving the entire low segment to the entrant. We refer to this latter equilibrium as `strategic separation', since the two firms become local monopolists in two different segments. It is this shift in the incumbent's focus that explains the generic competition paradox: with the low segment lost, the incumbent no longer needs to keep its price low. Our welfare analysis suggests that although entry increases aggregate welfare, it decreases consumer surplus. This finding has important policy implications: a social planner who focuses on the consumers' side may find that pushing for patent expiry is counterproductive.
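A stylised calculation shows the price mechanism at work; the demand curves here are our own illustrative choices, not the paper's. Let $D_H$ be loyal (high-segment) demand and $D_L$ low-segment demand. Before entry the incumbent solves $\max_p\, p\,[D_H(p) + D_L(p)]$; under strategic separation it solves $\max_p\, p\,D_H(p)$. With, say, $D_H(p) = 1 - p/2$ and the more elastic $D_L(p) = 1 - p$,

\[
p^{m} = \arg\max_p\, p\left(2 - \tfrac{3p}{2}\right) = \tfrac{2}{3}
\;<\;
p^{e} = \arg\max_p\, p\left(1 - \tfrac{p}{2}\right) = 1 ,
\]

so dropping the price-sensitive segment raises the brand-name price.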
Abstract: This paper investigates how platform rules affect the magnitude and composition of social surplus on secondary ticket markets. We first analyse a large dataset scraped from a popular secondary ticket market platform, TicketSwap. Our key variable in this analysis is the ratio between a ticket's secondary and primary prices. We examine its distribution, and then run fixed-effects regressions to see how other variables influence it. In particular, we find that these price ratios tend to decrease as the event approaches. Next, we design a theoretical model of ticket markets in which buyers and secondary sellers arrive according to two Poisson processes. Arriving secondary sellers price their tickets to maximise their expected payoffs, while arriving buyers simply base their purchase decisions on their reservation prices. We then run simulations based on this model under two specifications. The first (`constrained') specification reflects the rules of TicketSwap: the platform takes a service fee after each transaction, and prices have lower and upper limits. The second (`unconstrained') specification omits these rules. Having obtained the two simulated datasets, we subject them to the same analysis as the TicketSwap data. We find that the constrained artificial dataset behaves very similarly to the scraped one, and hence argue that our model provides an adequate description of ticket markets. Finally, we use the two simulated datasets to investigate the welfare implications of TicketSwap's constraints. Although both simulated markets are far from Pareto-optimal, the constraints cause only a slight decrease in welfare. Most of the inefficiency stems from the market's dynamic nature, which cannot be eliminated unless the platform uses auctions. However, examining the composition of social surplus in the two cases shows that TicketSwap's rules redistribute a substantial amount of welfare from secondary sellers to buyers. We therefore conclude that the constraints are almost neutral for society as a whole, but strictly beneficial to buyers.
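The simulation logic can be sketched in a few lines of Python. This is a minimal stand-in with simplifications of our own: sellers here post a fixed markup over face value instead of the optimal price the paper derives, and reservation prices are uniform; the fee, floor and cap mirror the `constrained' specification only in spirit.

```python
import random

def simulate(T=100.0, lam_buy=1.0, lam_sell=0.8, fee=0.0,
             floor=None, cap=None, seed=0):
    """Buyers and sellers arrive via two merged Poisson processes; a buyer
    purchases the cheapest listed ticket at or below her reservation price.
    Sellers' pricing is a naive markup here, whereas the paper's sellers
    maximise expected payoffs."""
    rng = random.Random(seed)
    t, listings = 0.0, []
    surplus_buyers = surplus_sellers = 0.0
    face = 1.0                                     # primary (face) price
    while True:
        t += rng.expovariate(lam_buy + lam_sell)   # next arrival time
        if t > T:
            break
        if rng.random() < lam_buy / (lam_buy + lam_sell):   # a buyer arrives
            r = rng.uniform(0.5, 2.0) * face       # reservation price
            affordable = [p for p in listings if p <= r]
            if affordable:
                p = min(affordable)
                listings.remove(p)
                surplus_buyers += r - p
                surplus_sellers += (1 - fee) * p - face
        else:                                      # a seller arrives
            p = 1.2 * face                         # naive markup price
            if floor is not None:
                p = max(p, floor)
            if cap is not None:
                p = min(p, cap)
            listings.append(p)
    return surplus_buyers, surplus_sellers

print(simulate())                                  # `unconstrained'
print(simulate(fee=0.05, floor=0.8, cap=1.2))      # TicketSwap-like rules
```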
Abstract: When playing random games sequentially, people tend to modify their expectations about the next game based on the outcomes of previous ones. Two forms of this phenomenon are the so-called hot hand and gambler's fallacies. Several papers in the literature have tried to provide empirical evidence that people commit these fallacies when betting on random events, typically by showing that gamblers tend to change their bet amounts and odds after a winning or losing streak. However, such streaks might affect betting behaviour not only via changes in expectations. A prospect-theoretical explanation is also available: a streak changes the gamblers' current asset positions, but their reference points do not shift accordingly. This paper asks whether the aforementioned observations can be explained by prospect theory, or whether they indeed provide sufficient evidence of gamblers committing the two fallacies. To answer this question, we build a prospect-theoretical model of betting decisions in which the gamblers' expectations are not influenced by the outcomes of previous games. We derive the gamblers' optimal decisions as a function of their parameters and then run random simulations based on these results. We examine how some of the simulated gamblers behave and compare this artificial dataset with a real-life dataset containing more than a hundred thousand online bets. We find that at the aggregate level the two databases are not similar to each other; however, the differences might be the result of the randomised parametrisation of our simulations. The individual-level analysis reveals artificial gamblers who exhibit the streak-dependent behaviour discussed above. Since the two fallacies played no role in the model underlying the simulations, we conclude that the empirical observations do not necessarily indicate their presence. What the literature often sees as a result of the hot hand and gambler's fallacies might therefore be a simple prospect-theoretical phenomenon.
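The reference-point mechanism is easy to demonstrate. The sketch below uses the standard Tversky-Kahneman (1992) value function with its usual parameters; the fixed-odds payoff structure and the `reference_shift' device are our illustrative assumptions, not the paper's specification.

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: concave over gains,
    convex and loss-averse over losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def prospect_of_bet(stake, odds, p_win, reference_shift=0.0):
    """Prospect value of a fixed-odds bet when the reference point lags
    the current asset position by `reference_shift' (e.g. an unabsorbed
    recent win).  Probabilities enter linearly; the paper's model may
    also weight them."""
    gain = stake * (odds - 1.0) + reference_shift
    loss = -stake + reference_shift
    return p_win * pt_value(gain) + (1 - p_win) * pt_value(loss)

# A recent win of 10 that the reference point has not yet absorbed moves
# both outcomes into the gain domain for small stakes, so the optimal
# stake rises even though the win probability is unchanged:
for shift in (0.0, 10.0):
    best = max(range(1, 51),
               key=lambda s: prospect_of_bet(s, odds=2.2, p_win=0.5,
                                             reference_shift=shift))
    print(shift, best)
```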
Abstract: This paper presents a model of social media platforms with a novel approach. Unlike in previous models, where platforms maximised their profits by choosing prices for the two sides, here their only decision variable is their advertising intensity. The equilibrium is reached in a sequential game between three types of agents: platforms, advertisers, and users. We consider two specifications: in the first, there is a single monopolist platform; in the second, two platforms form a competitive duopoly in which users single-home and advertisers multi-home. Our main research question is how competition between platforms affects aggregate welfare. To answer it, we derive the subgame perfect Nash equilibria of both cases and compare the results. We find no clear ranking between monopoly welfare and competitive welfare: which is larger depends on the values of the model's parameters. Therefore, the classical notion that competition is always better than monopoly does not necessarily hold for social media platforms. This result is especially relevant to recent discussions about how to regulate social networks: since the socially optimal solution varies from market to market, a social planner must be careful to collect all available information about the market it wants to regulate.
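As a minimal illustration of the platform's problem (with a functional form that is ours, not the paper's), suppose a monopolist's revenue is proportional to its advertising intensity $a$ times its user base $N(a)$, and users shrink linearly in ad intensity:

\[
\max_{a \ge 0}\; a\,N(a), \qquad N(a) = \max\{0,\, 1 - \gamma a\}
\quad\Longrightarrow\quad a^{*} = \frac{1}{2\gamma}.
\]

In the duopoly specification, the users' single-homing choice between the two platforms enters this trade-off, which is why the welfare ranking ends up depending on the parameters.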
Abstract: This paper builds a more realistic dynamic model of how people find their partners. In this model, there is a finite number of periods and agents maximise their present-value utility. We assume that people do not know how attractive their potential partners are to them until they get to know each other. In each period, each agent randomly meets exactly one person and decides whether or not to partner with them. We derive the agents' optimal strategy in this game and find that the so-called `gate closing panic' is present: single people grow more desperate as they realise that they have limited time left to find a partner, and lower their standards in response. This phenomenon renders the resulting matching unstable and inefficient. Our goal is to quantify this inefficiency and measure the deadweight loss of the gate closing panic. Since this loss may vary from market to market, we run several random simulations with different parametrisations and compute the aggregate welfare of all agents. In each case we compare the outcome of this mechanism to the optimal one we would obtain by running the Gale-Shapley algorithm in the first period; the ratio of the two welfare levels measures how efficient our mechanism is. Re-running the simulation many times, we find that this efficiency measure appears to follow a lognormal distribution. Finally, we show how this result depends on the chosen parametrisation.
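The threshold dynamics behind the gate closing panic can be reproduced with a short backward-induction computation. The sketch below makes assumptions of its own (i.i.d. match values uniform on [0, 1], no discounting, a zero payoff for staying single, permanent matches), so it illustrates the mechanism rather than the paper's exact model.

```python
def acceptance_thresholds(T):
    """Backward induction for a T-period search problem with i.i.d.
    match values X ~ U[0, 1].  An agent in period t accepts any value
    above the continuation value V[t+1], and for uniform X,
    E[max(X, v)] = v + (1 - v)**2 / 2."""
    thresholds = [0.0] * (T + 1)     # threshold in the last period is 0
    V_next = 0.0                     # value of passing the deadline single
    for t in range(T, 0, -1):
        thresholds[t] = V_next       # accept in period t iff X >= V[t+1]
        V_next = V_next + (1 - V_next) ** 2 / 2   # V[t] = E[max(X, V[t+1])]
    return thresholds[1:]

# Thresholds fall as the deadline nears -- the gate closing panic:
print([round(x, 3) for x in acceptance_thresholds(5)])
# -> [0.742, 0.695, 0.625, 0.5, 0.0]
```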
Abstract: This paper presents a rational model of voter decision-making. It is assumed that voter and party ideologies can be described by their locations on a one-dimensional spectrum between left and right. We assume two parties and two voters, one of whom is uninformed, i.e. not aware of the exact location of the parties’ ideologies. The utility of a voter depends on how close her ideology is to the winning party’s ideology. This paper seeks to answer the question of how the voters’ ideologies affect their turnout. We show that an uninformed voter will abstain if her ideology lies in the middle tenth of the scale. We refer to this result as the ‘moderate voter’s curse’: voters with extreme ideologies turn out to the polls at higher rates than moderates. We then examine the positive and negative effects of this phenomenon on the democratic establishment. We conclude that, while there are some negative consequences, the moderate voter’s curse should be interpreted as a response to a problem rather than as the problem itself. The real problem is the asymmetric information between parties and voters, whose damaging effects are reduced rather than increased by the moderate voter’s curse.
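To make the cutoff concrete: if the ideological spectrum is normalised to the unit interval (a normalisation we assume here), the middle tenth is

\[
\left(\tfrac12 - \tfrac{1}{20},\; \tfrac12 + \tfrac{1}{20}\right) = (0.45,\ 0.55),
\]

so the uninformed voter abstains exactly when her ideology lies within 0.05 of the centre.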