Horizon 2020 Project:

Information Diffusion on Networks (ION)

funded through the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 793769.

I was awarded a Marie Skłodowska-Curie fellowship, part of the EU's Horizon 2020 programme, to study the diffusion of information, in particular misinformation, on networks. The project ran from 2019 to 2022. Its aim was to analyse the simultaneous diffusion of multiple pieces of information on (social) networks: how related messages interact, how rumours diffuse in the presence of truthful information and verification possibilities, and how network structure interacts with the virality of information.

Misinformation can have severe consequences, such as AIDS denialism or the debunked myth of a link between vaccinations and autism. The COVID-19 pandemic has once again brought the importance of accurate information about vaccine safety and efficacy into the spotlight. Political examples also abound. The rise of communication through online (social) networks is often cited as contributing to an increased spread of rumours and misinformation. Yet the literature so far has had rather little to say about how alternative pieces of information interact on networks, especially when rumours may be debunked. On this page, I provide short (non-technical) summaries of the main results.

This project was carried out at Ca' Foscari University of Venice, and its core comprises the following three papers:

Diffusion of Multiple Information
This paper employs the SIS (Susceptible-Infected-Susceptible) framework from epidemiology to study how two pieces of information (memes) interact when they diffuse simultaneously on a network. The starting point is the observation that, much as in print or TV media, in conversations it is often impossible to pass on every piece of information one is aware of, simply because time to talk is limited. As a benchmark, each person in the paper has an intrinsic preference for one type of information (say, sports news, political news, celebrity gossip, ...), and somebody who is aware of two memes talks, at a meeting, about the meme they prefer. The memes themselves are unrelated to each other. First, my results show that whenever one meme becomes endemic in the population, so does the other, as long as each meme is preferred by at least some people. Information is thus resilient, a prediction in line with the vast number of topics that simultaneously diffuse on (online) social networks. Second, if the population segregates completely according to information preference, then no person is ever informed of both memes, and in each group only the preferred meme survives. A form of polarization emerges, driven purely by polarized communication patterns. Finally, under segregation, fewer people are informed of any meme at all, although more are informed of their preferred meme. Segregation may therefore lead not only to polarization but also to a loss of information overall. Since more people are informed of their preferred meme, however, individuals may have an incentive to segregate, even though society as a whole would be better informed (i.e., more people would be informed) if people also talked to those with different meme preferences.
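To make the mechanics concrete, here is a minimal simulation sketch of this kind of two-meme SIS process: each step, one random conversation takes place and one random agent may forget, and a meme spreads when transmission roughly outpaces forgetting. The matching protocol, group sizes, and all parameter values are illustrative assumptions of mine, not those of the paper.

```python
import random

def simulate(n=500, segregated=False, beta=0.3, delta=0.1,
             steps=100_000, seed=1):
    """Two memes, A and B, diffuse SIS-style. Each agent prefers one meme
    and, when aware of both, only ever passes on the preferred one.
    Parameters are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    half = n // 2                       # agents 0..half-1 prefer A, the rest B
    pref = ['A'] * half + ['B'] * (n - half)
    aware = [set() for _ in range(n)]   # memes each agent currently knows
    for i in rng.sample(range(half), 5):     # seed meme A among A-preferrers
        aware[i].add('A')
    for i in rng.sample(range(half, n), 5):  # seed meme B among B-preferrers
        aware[i].add('B')

    for _ in range(steps):
        # one conversation: speaker i meets partner j
        i = rng.randrange(n)
        if segregated:                  # partners come only from the own group
            j = rng.randrange(half) if i < half else rng.randrange(half, n)
        else:
            j = rng.randrange(n)
        if aware[i] and rng.random() < beta:
            # limited talk time: pass on the preferred meme if known
            meme = pref[i] if pref[i] in aware[i] else next(iter(aware[i]))
            aware[j].add(meme)
        # one forgetting event (the 'recovery' part of SIS)
        k = rng.randrange(n)
        for m in list(aware[k]):
            if rng.random() < delta:
                aware[k].remove(m)

    informed = sum(1 for s in aware if s)
    know_both = sum(1 for s in aware if len(s) == 2)
    return informed, know_both

print("mixed (informed, know both):     ", simulate(segregated=False))
print("segregated (informed, know both):", simulate(segregated=True))
```

In the segregated run, no B message ever enters the A group (and vice versa), so the number of agents aware of both memes is zero, mirroring the polarization result described above.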

This paper shows how even a quite stylized model of information diffusion can capture many aspects of communication that are of concern. The simple assumption that people prefer to talk about the topics that interest them drives the results that information is resilient, that segregation leads to a form of polarization, and that segregation is harmful for overall information prevalence. The stylized nature of the model allows for clear results and sheds some light on the complicated ways in which different pieces of information interact in the diffusion process.

The paper was published in Games and Economic Behavior in 2019 (link), and a pre-print version is available here.

Debunking Rumors in Networks
In this paper, my co-authors and I study the diffusion of two directly related messages: a true and a false one (the rumor) in a social network. People are characterised by their general worldview; one of the two messages reinforces it while the other opposes it. Upon hearing a message, people may believe it, disbelieve it, or debunk it through costly verification. We again employ the SIS framework, and we are particularly interested in how people's choice of how much effort to put into verifying information affects the prevalence of correct and incorrect information (truth and rumor). Combining the quite mechanical diffusion process of the SIS framework with individuals' endogenous choices is methodologically novel; it allows us to ask meaningful questions while keeping the model tractable enough to answer them.

Our first result is reminiscent of the first paper of the project, "Diffusion of Multiple Information": whenever the truth survives, so does the rumor. Intuitively, verification is costly, even if the cost may be low, so people never have an incentive to verify to such an extent that the rumor disappears entirely (if they were sure that what they had been told was the truth, they would have no incentive to verify in the first place, and without verification the rumor thrives). Thus, even verifiable rumors may survive, although everyone is better off knowing the truth and nobody has an incentive to spread rumors.

We also find that while ease of communication increases absolute rumor prevalence, it is irrelevant for relative rumor prevalence (the prevalence of the rumor relative to that of the truth). This is an interesting result: in discussions about the prevalence of rumors, one argument has been that the growth of online communication may have increased rumor prevalence simply by making communication easier and allowing more people to post their opinions. Our result stresses that this channel does exist, but that increased communication also benefits the truth, and does so proportionally. In relative terms, ease of communication is irrelevant.

Finally, we find that another factor often blamed for a (perceived) increase in rumor prevalence, namely homophily (the tendency of people to interact primarily with others similar to themselves), has a much more intricate effect. On the one hand, as echo-chamber arguments claim, increases in homophily imply that people tend to receive relatively more messages that are aligned with their worldview, and these get verified less than opposing messages. Through this channel, homophily lowers verification and benefits rumors. On the other hand, people are aware that messages received from others who share their worldview are less likely to have been verified. Such messages are deemed less trustworthy, and people respond by verifying more. This channel increases verification and reduces rumor prevalence. Which of the two effects dominates depends on the exact form in which verification effort translates into success, something that is very difficult to estimate.
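The core intuition behind the survival result, that verification incentives vanish once the rumor becomes rare, can be illustrated with a toy fixed-point computation. The functional forms below (a quadratic verification cost and a stylized SIS steady state) are my own assumptions for illustration and are not the model in the paper.

```python
# Toy fixed point between verification effort and rumor share.
# All functional forms and parameters are illustrative assumptions,
# not the model in the paper.

BETA, DELTA = 0.4, 0.1   # contact and forgetting rates (assumed)
COST = 2.0               # curvature of the verification cost c(v) = COST * v**2 / 2

def best_response(q):
    """Optimal verification effort v* given the share q of circulating
    messages that are rumors: the marginal benefit of verifying is
    proportional to q (only false messages are worth debunking), the
    marginal cost is COST * v, so the first-order condition is q = COST * v."""
    return min(1.0, q / COST)

def rumor_share(v):
    """Stylized SIS steady state: only unverified rumors keep spreading,
    so the rumor is endemic iff BETA * (1 - v) > DELTA, while the truth
    always spreads at rate BETA."""
    rumor = max(0.0, 1.0 - DELTA / (BETA * (1.0 - v)))
    truth = 1.0 - DELTA / BETA
    return rumor / (rumor + truth)

q, v = 0.5, 0.0
for _ in range(200):          # iterate best responses to a fixed point
    v = best_response(q)
    q = rumor_share(v)

print(f"equilibrium: verification v* = {v:.3f}, rumor share q* = {q:.3f}")
# q* is strictly positive: once the rumor is rare, verifying no longer
# pays, verification falls, and the rumor recovers, so it never dies out.
```

The design choice worth noting is the feedback loop: verification responds to the rumor share, and the rumor share responds to verification, which is exactly why driving the rumor to zero is never an equilibrium in this sketch.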

Overall, our model highlights that successful policies in the fight against rumors increase individuals' incentives to verify. Policies that instead target the degree of homophily in a social network, or that aim to increase the amount of correct (or decrease the amount of incorrect) messages in circulation, may backfire. The paper has been accepted at AEJ: Micro, and the working paper can be downloaded here.


In the third paper of the project, we continue to employ the SIS framework with agents of differing worldviews, but we investigate the choice not of individuals but of a social planner, whose aim is to maximise the prevalence of truthful information in society by influencing the proportion of people who can verify (inspect) the messages they receive. The idea is that, while verification effort is clearly an individual choice (as in "Debunking Rumors in Networks"), it can be influenced, or even made possible, by government policies, for example through the resources devoted to increasing information literacy in the population. We find that rumors may in fact benefit the prevalence of the truth when people ignore messages that oppose their worldview unless they inspect them for veracity: in such a setting, some people become aware of the truth only after hearing, and verifying, a rumor. Therefore, while an increase in inspection rates always decreases rumor prevalence, there are instances where it also decreases the prevalence of the truth. In short, it may be optimal for a planner to let a rumor circulate, even when it would be possible to eradicate it completely.
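A toy mean-field sweep can illustrate why full inspection need not maximise truth prevalence. The setup below is a deliberately strong simplification and entirely my own assumption, not the paper's model: a single group whose worldview aligns with the rumor, truth messages arriving only from outside at a small rate and not being re-broadcast within the group, and inspection (with probability p) of any received message revealing the truth.

```python
# Toy mean-field sweep over the planner's inspection rate p. Strong
# illustrative assumptions (not the paper's model): one group whose
# worldview aligns with the rumor; truth messages arrive only from
# outside at rate EPS and are not re-broadcast within the group;
# inspecting any message (probability p) reveals the truth.

BETA, DELTA, EPS = 0.5, 0.1, 0.01   # contact, forgetting, outside-truth rates
DT = 0.2                            # step size of the Euler iteration

def steady_state(p, iters=20_000):
    """Iterate the mean-field dynamics to a steady state for inspection rate p."""
    r, t = 0.01, 0.01                            # rumor believers, truth knowers
    for _ in range(iters):
        u = 1.0 - r - t                          # uninformed agents
        r_dot = u * BETA * r * (1 - p) - DELTA * r    # believed, uninspected rumors
        t_dot = u * p * (BETA * r + EPS) - DELTA * t  # debunked rumors + outside truth
        r = max(r + DT * r_dot, 0.0)
        t = max(t + DT * t_dot, 0.0)
    return t, r

for p in (0.25, 0.55, 0.80, 1.00):
    t, r = steady_state(p)
    print(f"p = {p:.2f}: truth = {t:.3f}, rumor = {r:.3f}")
# The rumor falls monotonically in p, but truth prevalence peaks at an
# interior p: once the rumor is eradicated, the debunking channel to the
# truth closes, so full inspection is not optimal for the planner here.
```

In this sketch, the rumor acts as a carrier for the truth among agents who would otherwise ignore it, which is the mechanism behind the paper's result that a planner may prefer to let a rumor circulate.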


The papers resulting from this project replicate various observed stylized facts of information diffusion. Hopefully, they help increase our understanding of the factors that determine when rumors thrive, and of how policies to reduce rumors or promote truthful information may be designed.