The 2026 Political Networks Conference at Manchester will offer several training workshops, for which you can sign up when registration opens in May. We are currently in the planning stage, but we anticipate we can offer the following workshops.
Lorien Jasny (University of Exeter)
Tue, 4 August 2026, 9:00am–12:30pm
Background: How do we measure social networks? Which descriptives and metrics matter for which kinds of questions? The field has an enormous variety of ways of measuring and analysing networks, and figuring out what you need to measure, let alone how to do it, is a critical part of any research project.
Contents: This workshop introduces the main descriptive measures used in social network analysis. We cover node-level indices such as degree, betweenness, closeness, and eigenvector centrality, as well as brokerage. We will also discuss graph-level indices such as centralization, clustering, reciprocity, and transitivity. Our final topics will include subgroup methods based on cliques and clans, as well as blockmodeling approaches. Each metric will be introduced and described with reference to further reading before we look at how to perform the analysis in R.
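To make two of these measures concrete, the snippet below computes degree centrality and transitivity by hand on an invented toy network. The workshop itself works in R (e.g. with sna's degree() and gtrans() functions); dependency-free Python is used here only as a language-neutral sketch.

```python
# Toy undirected friendship network (invented data), stored as an
# adjacency set per node.
adj = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C", "E"},
    "E": {"D"},
}

n = len(adj)

# Degree centrality: a node's degree divided by the maximum possible
# degree, n - 1.
degree_centrality = {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

# Transitivity: the fraction of two-paths u - v - w that are "closed"
# by a direct u - w tie (i.e. form a triangle).
closed, total = 0, 0
for v, nbrs in adj.items():
    for u in nbrs:
        for w in nbrs:
            if u != w:
                total += 1             # a two-path u - v - w
                closed += w in adj[u]  # closed if u - w is also a tie
transitivity = closed / total          # 0.5 here: one triangle (A, B, C)
```

Node C has the highest degree centrality (0.75), matching the intuition that it bridges the A–B–C cluster and the D–E pendant.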
Prerequisites: No previous background in R or social network analysis is presumed, but you should have installed R, RStudio, the statnet suite of packages, and igraph. We will mostly concentrate on the sna package (part of the statnet suite), but some use of network (also in statnet) and igraph will be made as well. If you have any issues installing these packages or any questions about the workshop, please contact the instructor at L.Jasny@exeter.ac.uk.
Lorien Jasny (University of Exeter)
Tue, 4 August 2026, 1:30pm–5:00pm
Background: Statistical inference with networks is tricky! We often wish to explicitly model network interdependencies, where the existence of one tie depends on the existence of other ties. The exponential random graph model provides a way of modelling empirical network data from a series of sufficient statistics—often network motifs—that we can think of as building blocks of observed networks.
Contents: This workshop provides a hands-on tutorial to using exponential-family random graph models (ERGMs) for statistical analysis of social networks, using the ergm package in the statnet suite of packages. The ergm package provides tools for the specification, estimation, assessment and simulation of ERGMs that incorporate the complex dependencies within networks. Topics covered in this workshop include: an overview of the ERGM framework; relationship to logistic regression; types of terms used in ERGMs; defining and fitting models to empirical data; interpreting model coefficients; goodness-of-fit and model adequacy checking; simulation of networks using fitted ERG models; and degeneracy assessment and avoidance.
Prerequisites: This workshop requires R, RStudio, and the statnet suite of packages. Some prior knowledge of R is required, although not necessarily of statnet. Some familiarity with social network analysis—what network data is and how we describe it—would also be useful. If you have any issues installing these packages or any questions about the workshop, please contact the instructor at L.Jasny@exeter.ac.uk.
Ingo Scholtes (University of Würzburg)
Tue, 4 August 2026, 9:00am–12:30pm
(No description yet.)
Ingo Scholtes (University of Würzburg)
Tue, 4 August 2026, 1:30pm–5:00pm
(No description yet.)
Silviya Nitsova (University of Manchester)
Wed, 5 August 2026, 9:00am–12:30pm
Background: Studying political networks empirically can be very challenging. Decisions about how to define actors, identify relationships, and construct network data can have substantial consequences for the conclusions drawn. Importantly, standard data collection approaches may be misleading in network settings and lead to problems that are difficult to address later in the research process.
Contents: This workshop will introduce participants to the empirical study of political networks, with an emphasis on linking theory to empirics and making informed design choices. We will discuss how to select nodes and links, choose appropriate network data collection strategies, including surveys, interviews, and secondary data (e.g., administrative, expert, and digital trace data), and consider trade-offs between precision and feasibility. We will pay special attention to strategies for studying hidden network phenomena (e.g., corruption, clientelism, political influence, rebel and militant organization, and illicit trade).
Prerequisites: No prior experience with R or network analysis is required.
Laurence Brandenberger (University of Zürich)
Wed, 5 August 2026, 1:30pm–5:00pm
Background: Network data in the social sciences is inherently complex. It often involves multiple types of entities, relationships, and dimensions, such as multimodal, temporal, or weighted structures. As datasets grow in size and scope, managing them becomes increasingly challenging. Researchers frequently find themselves working with fragmented data stored across multiple CSV files, spreadsheets, or relational databases.
In such settings, the difficulty is not only analytical, but also structural: How can we store, connect, and access complex data in a way that remains scalable, transparent, and reproducible?
Knowledge graphs offer a powerful solution to these challenges. By representing data as interconnected entities and relationships, they provide a flexible and intuitive framework for organizing complex information.
This workshop shows you how to tidy up, connect, and query your data with knowledge graphs. Learn how to organize complex information more effectively and build efficient, research-ready data structures for social science.
Contents: This workshop begins with a conceptual introduction to knowledge graphs and their relevance for social science research. We will explore what knowledge graphs are, how they differ from traditional data structures, and why they are particularly well-suited for handling complex, relational datasets.
We will discuss how knowledge graphs:
Integrate diverse and heterogeneous data sources
Make relationships between entities explicit and analyzable
Enhance transparency, reproducibility, and long-term data usability
After this conceptual foundation, the workshop moves into a hands-on session focused on practical implementation. Participants will learn how to:
Build a simple knowledge graph from their own data (e.g., CSV files, Excel spreadsheets, relational databases)
Link internal data to external sources
Apply constraint checks to improve data quality and consistency
Write and execute basic Cypher queries to explore entities, relationships, and patterns
This in-person workshop emphasizes practical application and active learning. By the end of the session, participants will have the core skills needed to design, construct, and query their own knowledge graph, and to integrate this approach into their research workflows.
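As a minimal, dependency-free illustration of the underlying idea (not of Neo4j itself): a knowledge graph can be thought of as a set of (subject, relation, object) triples queried by pattern matching, which is roughly what a Cypher MATCH clause does. All entity and relation names below are invented.

```python
# A tiny knowledge graph as (subject, relation, object) triples.
triples = [
    ("MP_Smith", "MEMBER_OF", "Finance_Committee"),
    ("MP_Jones", "MEMBER_OF", "Finance_Committee"),
    ("MP_Smith", "DONATED_BY", "Acme_Ltd"),
    ("Finance_Committee", "PART_OF", "Parliament"),
]

def match(triples, s=None, r=None, o=None):
    """Return all triples matching the pattern (None acts as a wildcard),
    loosely analogous to a Cypher MATCH clause."""
    return [(ts, tr, to) for ts, tr, to in triples
            if s in (None, ts) and r in (None, tr) and o in (None, to)]

# Who sits on the Finance Committee? Cypher analogue:
#   MATCH (p)-[:MEMBER_OF]->(:Committee {name: "Finance_Committee"}) RETURN p
members = [s for s, _, _ in match(triples, r="MEMBER_OF",
                                  o="Finance_Committee")]
```

A graph database adds indexing, constraints, and a query language on top of this idea, but the mental model—explicit entities linked by typed relationships—is the same.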
Prerequisites: No prior knowledge is necessary. If you want to follow along with the practical sessions on your own computer, please have Python installed; you may also download Neo4j Desktop in advance.
Joris Mulder (Tilburg University)
Wed, 5 August 2026, 9:00am–12:30pm
Background: Due to technological developments, relational event data sequences are becoming increasingly easy to acquire. Examples include sequences of interstate disputes between countries or exchanges between political actors. These time-stamped data contain information about who interacts with whom in a network and when. Additional information may also be available, such as event duration, event type, or event weight. Such high-resolution data make it possible to gain a deeper understanding of complex social interaction dynamics. They allow researchers to address questions such as what drives interactions in temporal social networks, how long past events continue to shape future interactions, how interaction behaviour differs across actors and dyads, and how social interaction changes over time.
Contents: This workshop introduces the relational event model (REM) as a framework for addressing such questions. It discusses the methodological foundations of the model and provides hands-on training in the use of the remverse software suite in R, which includes packages such as remify, remstats, remstimate, and remulate. Participants will learn how to process relational event data, compute statistics based on past events, fit REMs, and simulate relational event sequences from fitted models for prediction and goodness-of-fit assessment. The workshop is intended for researchers who want to analyse fine-grained interaction dynamics in relational data within a principled statistical framework.
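To give a flavour of the "compute statistics based on past events" step, the toy sketch below counts two classic endogenous REM statistics, inertia and reciprocity, from an invented event sequence; in the workshop this kind of computation is handled by remstats in R.

```python
# Relational events as (time, sender, receiver) tuples (invented data,
# e.g. interactions between country actors).
events = [
    (1.0, "FR", "DE"),
    (2.5, "DE", "FR"),
    (3.0, "FR", "DE"),
    (4.2, "FR", "UK"),
]

def inertia(events, t, sender, receiver):
    """Number of past sender -> receiver events before time t."""
    return sum(1 for (et, s, r) in events
               if et < t and s == sender and r == receiver)

def reciprocity_stat(events, t, sender, receiver):
    """Number of past receiver -> sender events before time t."""
    return sum(1 for (et, s, r) in events
               if et < t and s == receiver and r == sender)

# At t = 3.0 the dyad FR -> DE has inertia 1 (the t = 1.0 event) and
# reciprocity 1 (DE replied at t = 2.5); in a REM such statistics enter
# the event rate, so past interaction raises the hazard of the next one.
```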
Prerequisites: Some experience with statistical modelling and R is recommended.
Joris Mulder (Tilburg University)
Wed, 5 August 2026, 1:30pm–5:00pm
Background: Many scientific theories make predictions that go beyond whether an effect is present or absent. Rather, they often imply specific relationships among multiple key parameters, expressed through equality and order (one-sided) constraints. For example, in studying information exchange in a political network, one might expect the effect of shared committee memberships to be strongest, followed by influence attribution and then preference similarity, with all effects being positive (i.e., "committee > influence > similarity > 0"). Once multiple competing hypotheses have been formulated, the goal is to quantify the relative evidence for these competing scientific expectations. In practice, however, researchers often evaluate such expectations only indirectly through multiple post-hoc analyses of separate parameters. This has several well-known drawbacks: (i) it provides only indirect evidence for the theories of interest, (ii) it requires multiplicity corrections that reduce power, and (iii) it can encourage selective emphasis on statistically significant results.
Contents: This workshop introduces a direct approach to testing competing hypotheses of this kind using Bayes factors. Rather than testing parameters one by one, the focus is on comparing substantive hypotheses as coherent theoretical statements and quantifying the relative evidence for each. The workshop has an applied focus and centres on the use of the BFpack package in R, with brief attention to the BFpack module in JASP where relevant. Participants will learn how to formulate constrained hypotheses, understand the conceptual basis of the Bayes factor and its computation, interpret the results of Bayesian hypothesis tests, and report findings in a transparent and theory-driven way. The emphasis will be on practical implementation in commonly used models, including regression models as well as models used for relational and network data, such as exponential random graph models, network autocorrelation models, and relational event models. The workshop is intended for researchers who want to move beyond traditional post-hoc testing and evaluate theoretically motivated expectations more directly.
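One way to see the logic of such a test is the encompassing-prior idea: the Bayes factor of an order-constrained hypothesis against an unconstrained model is the posterior probability that the constraint holds divided by its prior probability. The sketch below illustrates this with invented numbers and simulated draws; BFpack's actual computation is more refined, so treat this as a conceptual toy only.

```python
import numpy as np

rng = np.random.default_rng(0)

def holds(samples):
    """Proportion of draws satisfying committee > influence > similarity > 0."""
    c, i, s = samples[:, 0], samples[:, 1], samples[:, 2]
    return np.mean((c > i) & (i > s) & (s > 0))

# Vague unconstrained prior centred at zero for the three effects.
prior = rng.normal(0.0, 10.0, size=(100_000, 3))

# Pretend posterior draws (e.g. from a fitted network model), with
# invented means 0.8, 0.5, 0.2 and standard deviation 0.1 each.
post = rng.normal([0.8, 0.5, 0.2], 0.1, size=(100_000, 3))

# Bayes factor for the ordered hypothesis vs. the unconstrained model.
BF = holds(post) / holds(prior)
```

Here the ordering is a priori unlikely (roughly 1/48 under the symmetric prior) but almost certain a posteriori, so the Bayes factor is large: the data provide strong relative evidence for the ordered hypothesis as a whole, without testing its parameters one by one.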
Prerequisites: Some experience with statistical modelling and R is recommended.
Lunch and coffee breaks will be provided. The registration desk is open from 8:30am.
Workshop rates: £60 (full rate) or £40 (discounted rate for students and postdocs) per workshop.