Variance in Scientific Networks

Overview

My doctoral research concerns how communities form opinions, especially about scientific questions. I design and explore computer models that simulate networks of scientists who generate random samples to answer particular scientific questions. These models reveal unacknowledged and unappreciated features that are inherent to science at a fundamental level, which in turn inform our understanding of more complicated situations. In addition, I critically analyze the limitations of such modeling techniques: what they can, and cannot, reveal about the situations at hand.

More specifically, I represent scientists as members of a network who each produce their own statistical research and share it with everyone they are connected to. I work out the effects of different network structures as well as of the scientists' sharing, trusting, and updating strategies. The result is a set of novel, elegant explanations of epistemic community issues such as polarization and group-think. Unlike previous explanations, these rely solely on the nature of statistical data and its distribution, and so hold regardless of whether the agents themselves are rational.
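
To make the setup concrete, here is a minimal sketch of the kind of model described above. It is not the model from the dissertation: the normally distributed evidence, the cycle network, and the simple averaging update are illustrative assumptions of this sketch only.

import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_ROUNDS = 10, 50
MU, SIGMA, SAMPLE_SIZE = 0.0, 1.0, 20   # true value and noise of each observation

# Cycle network: each agent shares results with its two neighbors.
neighbors = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}

opinions = rng.normal(MU, SIGMA, N_AGENTS)   # initial opinions about MU
for _ in range(N_ROUNDS):
    # Each agent runs its own "study": the mean of SAMPLE_SIZE noisy observations.
    results = rng.normal(MU, SIGMA / np.sqrt(SAMPLE_SIZE), N_AGENTS)
    # Each agent then averages its old opinion with the results it can see
    # (its own study plus those shared by its neighbors).
    opinions = np.array([
        0.5 * opinions[i]
        + 0.5 * np.mean([results[i]] + [results[j] for j in neighbors[i]])
        for i in range(N_AGENTS)
    ])

print("final opinions:", np.round(opinions, 3))
print("aggregate (mean) opinion:", round(float(opinions.mean()), 3))

The projects below vary this basic picture: the structure of the network, how much agents trust what they see, and which results get shared at all.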

This page presents a brief overview of some of my research projects. Please do not hesitate to contact me concerning any one of them!

My PhD dissertation includes a full explanation of the models and their results.

Web versions are available to use right in your browser.

The Generalized Zollman Effect

Is more communication always better? In this project, I investigate how the amount of connectivity in a network of researchers affects the community. I show that less communication (less sharing of results) can actually be useful for the researchers, because it makes their aggregate opinion more predictable.
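
One way to see what "more predictable" means here is to run many simulations and ask how much the community's final verdict varies from run to run. The sketch below does this for the toy model from the Overview on a sparse (cycle) and a dense (complete) network; the verdict measure (the fraction of agents who end up favoring "the true value is positive") and all parameters are illustrative choices of this sketch, not those of the dissertation.

import numpy as np

def run_once(neighbors, rng, n_agents=10, n_rounds=50, mu=0.0, sigma=1.0, n=20):
    opinions = rng.normal(mu, sigma, n_agents)
    for _ in range(n_rounds):
        results = rng.normal(mu, sigma / np.sqrt(n), n_agents)
        opinions = np.array([
            0.5 * opinions[i]
            + 0.5 * np.mean([results[i]] + [results[j] for j in neighbors[i]])
            for i in range(n_agents)
        ])
    return np.mean(opinions > 0)   # community verdict: share of agents favoring "mu > 0"

def cycle(n):    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
def complete(n): return {i: [j for j in range(n) if j != i] for i in range(n)}

rng = np.random.default_rng(1)
for name, net in [("cycle (sparse)", cycle(10)), ("complete (dense)", complete(10))]:
    verdicts = [run_once(net, rng) for _ in range(500)]
    # Lower variance across runs = a more predictable community.
    print(f"{name:17s} variance of verdict across runs: {np.var(verdicts):.4f}")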


One upshot of this project is that it reveals a previously ignored value of the diversity of opinions: agents holding different opinions from one another can make a community more reliable. Previous projects, such as Zollman, 2007, also argue for the value of diverse opinions, but there the value of diversity is tied to its producing a diversity of research paths. Those projects do not address whether a diversity of opinions is valuable when it does not affect what is researched. I show that diversity has a usefulness that is independent of its impact on which research paths are chosen: the effects of connectivity I identify rely purely on how information flows, not on what information exists.

Useful Epistemic Diversity

Diversifying opinions can make a community more reliable, but not all changes that diversify opinions also lead to reliability. In this project, I home in on exactly which sorts of diversity-inducing changes increase reliability. Two hypotheses are tested and rejected, leading to a third hypothesis that identifies the appreciation of diverse evidence as the key mechanism behind the advantages of diversifying opinions. That is, the evidence the agents generate has its own average and variance, and changes that bring the distribution of opinions closer to the distribution of that evidence are the ones that increase the reliability of the community as a whole.
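
The comparison at issue can be illustrated with the toy model from the Overview: put the distribution of individual study results (the "evidence") side by side with the distribution of opinions across agents at the end of a run. Summarizing "closeness" by mean and standard deviation is a simplification of this sketch; the dissertation's measures may differ.

import numpy as np

rng = np.random.default_rng(2)
N_AGENTS, N_ROUNDS = 10, 50
MU, SIGMA, SAMPLE_SIZE = 0.0, 1.0, 20
neighbors = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}

opinions = rng.normal(MU, SIGMA, N_AGENTS)
evidence = []                              # every study result produced so far
for _ in range(N_ROUNDS):
    results = rng.normal(MU, SIGMA / np.sqrt(SAMPLE_SIZE), N_AGENTS)
    evidence.extend(results)
    opinions = np.array([
        0.5 * opinions[i]
        + 0.5 * np.mean([results[i]] + [results[j] for j in neighbors[i]])
        for i in range(N_AGENTS)
    ])

# How closely does the spread of opinions track the spread of the evidence?
print("evidence  mean %+.3f  sd %.3f" % (np.mean(evidence), np.std(evidence)))
print("opinions  mean %+.3f  sd %.3f" % (opinions.mean(), opinions.std()))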


In the process of finding the root cause of diversity's usefulness, I investigate multiple features of a scientific community, including how imprecise the researchers are and a form of stubbornness (or intransigence) toward others' results. Most interestingly, I show the distinct effects of two aspects of the researchers' network: the number of connections and the centrality of those connections. The previous project shows that increasing connections decreases diversity, so one might expect the same when a network's connections are centralized so that some agents are more connected than others. I show this is not the case: centralizing connections diversifies opinions, yet this form of diversity does not make the community more predictable.
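
For concreteness, here is one way to sketch the two structural manipulations as adjacency lists: adding links evenly (every agent gains the same number of neighbors) versus centralizing them (most links are concentrated on a hub). The constructions are illustrative, not the dissertation's.

def even_ring(n, k=2):
    """Each agent is linked to its k nearest neighbors on each side: higher
    connectivity, zero centralization (every agent has the same degree)."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0] for i in range(n)}

def wheel(n):
    """Agent 0 is a hub linked to everyone; the rest form a simple ring: a
    similar number of links, but concentrated on one agent."""
    nbrs = {0: list(range(1, n))}
    for i in range(1, n):
        left = n - 1 if i == 1 else i - 1
        right = 1 if i == n - 1 else i + 1
        nbrs[i] = [0, left, right]
    return nbrs

degrees = lambda net: sorted(len(v) for v in net.values())
print("even ring degrees:", degrees(even_ring(10)))   # [4, 4, ..., 4]
print("wheel degrees:    ", degrees(wheel(10)))       # [3, 3, ..., 3, 9]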

Siloing, Extremitization, & Heterophily

I show how variance in scientific evidence can lead to variance in scientists' opinions in two distinct ways: siloing and extremitization. Siloing occurs when agents generate evidence from identical distributions but temporarily diverge in their opinions because they draw on independent pools of evidence. Extremitization occurs when pools of evidence are correlated, causing agents to pull each other further from the average than they would otherwise go. This provides a detailed understanding of how differing opinions result from features inherent to science itself, not merely from imperfect human tendencies.
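
The difference between the two evidence pools can be sketched directly: in the siloing case each agent draws its own independent samples from the same distribution, while in the extremitization case the samples share a common component, so errors are correlated across agents and do not wash out. The shared-noise construction below is an illustration of correlated evidence, not the dissertation's mechanism.

import numpy as np

rng = np.random.default_rng(4)
N_AGENTS, N_ROUNDS = 10, 200
MU, SIGMA = 0.0, 1.0

def evidence_round(correlated, rho=0.5):
    """One round of study results, one result per agent."""
    private = rng.normal(MU, SIGMA, N_AGENTS)       # each agent's own noise
    if not correlated:
        return private                              # siloing case: independent pools
    shared = rng.normal(MU, SIGMA)                  # noise common to every study this round
    # Mix so each result keeps the same variance but is correlated (rho) across agents.
    return MU + np.sqrt(rho) * (shared - MU) + np.sqrt(1 - rho) * (private - MU)

for label, correlated in [("independent pools", False), ("correlated pools", True)]:
    distances = []
    for _ in range(300):
        draws = np.array([evidence_round(correlated) for _ in range(N_ROUNDS)])
        opinions = draws.mean(axis=0)   # each agent's opinion: a running mean of its evidence
        distances.append(abs(opinions.mean() - MU))
    print(f"{label:18s} typical distance of the community's opinion from the truth: "
          f"{np.mean(distances):.3f}")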


The project investigates trust strategies as ways of mitigating or exacerbating this diversity, while also considering how predictable each strategy makes the community. The results show that trusting those one disagrees with and distrusting those one agrees with (heterophily) can limit the variance caused by scientific evidence while keeping the community more predictable than alternative trust strategies do. This demonstrates how a community can be predictable without allowing for diversity.
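
One way to see how a trust strategy enters the model: when an agent updates, each visible result is weighted by how much the agent trusts its source, and under heterophily that trust grows with disagreement. The update rule and the exponential trust curve below are illustrative choices of this sketch, not the dissertation's exact definitions.

import numpy as np

def heterophily_trust(my_opinion, their_opinion, steepness=2.0):
    """Trust grows with disagreement: near zero for peers I agree with,
    approaching 1 for peers far from my opinion."""
    return 1.0 - np.exp(-steepness * abs(my_opinion - their_opinion))

def trust_weighted_update(my_opinion, my_result, peer_opinions, peer_results):
    """Average my own result with peers' results, weighting each peer's
    result by how much I trust them under heterophily."""
    weights = [1.0] + [heterophily_trust(my_opinion, o) for o in peer_opinions]
    results = [my_result] + list(peer_results)
    pooled = np.average(results, weights=weights)
    return 0.5 * my_opinion + 0.5 * pooled

# Example: I sit at 0.0; one peer agrees with me, another is far away.
print(trust_weighted_update(
    my_opinion=0.0, my_result=0.05,
    peer_opinions=[0.02, 1.5], peer_results=[0.10, 1.2]))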

Echo-Chambers, Entrenchment, & Group-Think

I show that heterophily, a strategy in which agents trust each other less the more they agree, can help mitigate severe challenges posed by biased researchers: those whose evidence is distorted to support their view. I also provide a novel characterization of echo-chambers, entrenchment, and group-think, on which echo-chambers consist of feedback loops rather than the siloing away of information. These echo-chambers can manifest as either entrenchment (a community forever split on an issue) or group-think (a community unanimous on an arbitrary answer), depending on other conditions such as the amount of connectivity.


I compare heterophily with four other strategies: credulism (full trust), skepticism (no trust), bias-avoidance (trusting others less to the extent that they are biased), and homophily, in which agents trust each other more the more they agree; versions of homophily have been explored before, with both good and bad effects demonstrated. This chapter has three upshots: i) it explains how a very simple yet real form of bias can affect communities; ii) it provides a new understanding of echo-chambers (one that does not depend on silos) and of the relationships between echo-chambers, entrenchment, and group-think; and iii) it shows how heterophily can mitigate the effects of echo-chambers.
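
For concreteness, the five strategies can be sketched as trust-weight functions: each takes the current disagreement between two agents (and, for bias-avoidance, an estimate of how biased the other agent is) and returns how heavily the other's result is weighted. The functional forms and parameters are illustrative assumptions; the dissertation's definitions may differ.

import numpy as np

def credulism(disagreement, bias_estimate=0.0):
    return 1.0                                 # full trust in everyone

def skepticism(disagreement, bias_estimate=0.0):
    return 0.0                                 # ignore everyone else's results

def bias_avoidance(disagreement, bias_estimate=0.0):
    return np.exp(-2.0 * bias_estimate)        # trust less the more biased they seem

def homophily(disagreement, bias_estimate=0.0):
    return np.exp(-2.0 * disagreement)         # trust less the more we disagree

def heterophily(disagreement, bias_estimate=0.0):
    return 1.0 - np.exp(-2.0 * disagreement)   # trust less the more we AGREE

strategies = [credulism, skepticism, bias_avoidance, homophily, heterophily]
print("disagreement = 0.1, estimated bias = 0.5")
for s in strategies:
    print(f"  {s.__name__:15s} weight = {s(0.1, 0.5):.2f}")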

Selective Disclosure

I show that the way in which information is shared can have effects tantamount to manipulating data. Even a seemingly altruistic sharing strategy, sharing only the studies one believes will increase the accuracy of one's peers' opinions, can cause echo-chambers, which result in entrenchment or group-think depending on other parameters.
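
The sharing rule at issue can be sketched as a simple filter: an agent shares a study only if, by its own lights, the result would move a peer's opinion toward the truth, where "the truth" is proxied by the agent's own current opinion. This formalization is an illustrative reading of such a strategy, not the dissertation's exact rule.

def would_share(study_result, my_opinion, peer_opinion):
    """Selective disclosure: share the study only if it would pull the peer's
    opinion toward what I currently believe is the true value."""
    peer_after = 0.5 * peer_opinion + 0.5 * study_result   # how the peer would update
    return abs(peer_after - my_opinion) < abs(peer_opinion - my_opinion)

# Example: I believe the true value is 1.0; my latest study came out at 0.4.
# I share it with the peer at 0.1 (it moves them toward me) but withhold it
# from the peer at 0.9 (it would move them away from me).
for peer in (0.1, 0.9):
    print(f"peer at {peer}: share = {would_share(0.4, my_opinion=1.0, peer_opinion=peer)}")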


Additionally, I consider the effects of trust strategies when combined with sharing strategies. I show that heterophily, in which agents trust others to the extent that they disagree with them, can undermine these echo-chambers.