
IPSA Panel July 2016

Congress panel on 'The construction and use of expert surveys in the social sciences'

Where? IPSA World Congress, Poznan, Poland.

Chairs: Pippa Norris (Harvard and Sydney Universities) and Svend-Erik Skaaning (Aarhus) 

Co-sponsored by the Electoral Integrity Project (EIP) and the Varieties of Democracy (V-Dem) project

Theme: In this panel at the IPSA World Congress, international experts present papers that (i) use expert surveys in comparative politics and related social sciences to develop new ways of testing the validity of expert survey data; (ii) use meta-analysis to compare the standards, methods, and techniques of expert perceptual surveys with other sources of information in terms of validity, reliability, and legitimacy; and/or (iii) propose codes of conduct and good practice for conducting expert surveys. For details, see below.


Indices and datasets derived from expert surveys have become increasingly common in comparative social science, in risk analysis by private sector organizations, in evaluation research, and among NGOs and policy makers (Meyer & Booker 1991). Expert surveys are often used to deepen understanding of, among other things, the left-right positions of political parties and news media outlets, the perceived extent of corruption or bribe-paying, and the quality of democratic governance. They increasingly supplement alternative sources of information, such as mass citizen surveys, event analysis of media reports, and official statistics.

This data collection technique has been applied to diverse research topics, such as the series of studies on party and policy positioning (Laver and Hunt 1992; Huber and Inglehart 1995; Laver, Benoit, and Sauger 2006; McElroy and Benoit 2007; Saiegh 2009), the power of prime ministers (O’Malley 2007), evaluations of electoral systems (Bowler, Farrell, and Pettitt 2005), policy horizons (Warwick 2005), campaign communications (Lilleker, Stetka and Tenscher 2015), human rights and democracy (Landman and Carvalho 2010), and the quality of public administration (Teorell, Dahlström and Dahlberg 2011). Expert surveys have been widely used in research on corruption (the Corruption Perceptions Index; Transparency International 2013; Global Integrity), on measuring democracy since the 1900s (Varieties of Democracy; Coppedge et al. 2011), and on electoral integrity (Norris 2014; Norris 2015; Martinez i Coma and van Ham 2015). The World Bank Institute’s Good Governance indicators combine an extensive range of expert perceptual surveys drawn from the public and private sectors. Indeed, among the mainstream indicators of democracy, Freedom House’s estimates of political rights and civil liberties, Polity IV’s classification of autocracies and democracies, and the Economist Intelligence Unit’s estimates of democracy are all, in different ways, dependent upon expert judgments.

Expert surveys seem especially useful for measuring complex concepts that require expert knowledge and evaluative judgments, and for measuring phenomena for which alternative sources of information are scarce (Schedler 2012). Yet expert surveys are not risk-free, and scholars have pointed out their limitations (Budge 2000; Mair 2001; Steenbergen and Marks 2007). Moreover, in contrast to mass social surveys, we still lack a common methodology for constructing such surveys, as well as agreed technical standards and codes of good practice. There has been heated debate about the pros and cons of methods used to evaluate the spatial positions of party policies, and about the use of governance indicators more generally, but by contrast there has been remarkably little discussion of the challenges of validity, reliability, and legitimacy facing the construction of expert perceptual surveys. Yet it is critical to consider these issues, given the lack of a clear conceptualization and sampling universe of ‘experts’, contrasting selection procedures and reliance upon domestic and international experts, variations in the number of respondents and in the publication of confidence intervals, and the lack of consistent standards in levels of transparency and the provision of technical information. Moreover, more research is needed on how to evaluate the consequences of expert and context heterogeneity for the validity of expert judgments (Martinez i Coma and van Ham 2015), for example by using item response models to test and correct for expert heterogeneity (Pemstein et al. 2015), and by using techniques such as ‘anchoring vignettes’ (King and Wand 2007) or ‘bridge coders’ (V-Dem) to test and correct for context heterogeneity.
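The concern about expert heterogeneity can be made concrete with a toy simulation. The sketch below is purely illustrative (all numbers are invented, and the alternating fixed-effects correction is a much simpler cousin of the item response models cited above): when severe and lenient experts rate different subsets of countries, a naive per-country average is distorted, whereas jointly estimating expert offsets and country scores recovers the relative positions.

```python
import random
import statistics

random.seed(7)

# Hypothetical setup: 6 experts rate 8 "countries" on an integrity scale.
# Each expert applies a personal severity offset, and most experts rate
# only a subset of countries, so lenient and severe raters are spread
# unevenly across cases.
n_countries, n_experts = 8, 6
true_scores = [random.uniform(2, 9) for _ in range(n_countries)]
offsets = [random.uniform(-2.0, 2.0) for _ in range(n_experts)]

ratings = {}  # (expert, country) -> observed rating
for e in range(n_experts):
    rated = range(n_countries) if e == 0 else random.sample(range(n_countries), 5)
    for c in rated:
        ratings[(e, c)] = true_scores[c] + offsets[e] + random.gauss(0, 0.2)

def country_means(expert_adjust):
    """Per-country mean rating after subtracting each expert's offset."""
    return [statistics.mean(v - expert_adjust[e]
                            for (e, c2), v in ratings.items() if c2 == c)
            for c in range(n_countries)]

# Naive aggregation ignores who rated what.
naive = country_means([0.0] * n_experts)

# Alternating estimation: refine expert offsets and country scores in
# turn, centring the offsets to zero to pin down the scale.
est = [0.0] * n_experts
for _ in range(50):
    scores = country_means(est)
    est = [statistics.mean(v - scores[c]
                           for (e2, c), v in ratings.items() if e2 == e)
           for e in range(n_experts)]
    centre = statistics.mean(est)
    est = [o - centre for o in est]

corrected = country_means(est)
```

Published projects go further than this sketch: full item response models also allow experts to differ in reliability (not just severity) and report uncertainty around each country estimate, which is one reason the publication of confidence intervals matters.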


RC23 Elections, Citizens and Parties

Chair: Pippa Norris


1. Aggregating to Democracy: Generating Indices from Expert-Coded and Paired Comparison Data
Prof. Svend-Erik Skaaning, Dr. Brigitte Zimmerman, Dr. Michael Coppedge, Dr. Staffan Lindberg

2. Do experts judge elections differently in different contexts? The cross-national comparability of expert judgments on election integrity
Dr. Carolien Van Ham, Dr. Ferran Martinez i Coma

3. Complementary Strategies of Validation: Assessing the Validity of V-Dem Corruption Measures
Prof. Jan Teorell, Dr. Kelly McMann, Dr. Brigitte Zimmerman, Dr. Dan Pemstein

4. Do experts know how much they know? Do statistical models? Do we care?
Dr. Kyle Marquardt, Dr. Dan Pemstein, Mr. Eitan Tzelgov

Discussant: Alessandro Nai (University of Sydney)

References:


Benoit, Ken and Michael Laver. 2005. Party Policy in Modern Democracies. London: Routledge.

Bowler, Shaun, David Farrell and Robin Pettitt. 2005. ‘Expert opinion on electoral systems: So which electoral system is best?’ Journal of Elections, Public Opinion and Parties 15(1): 3-19.

Budge, Ian. 2000. ‘Expert judgments of party policy positions: Uses and limitations in political research.’ European Journal of Political Research 37(1): 103-113.

Transparency International. 2013. Corruption Perceptions Index.

Huber, John and Ronald Inglehart. 1995. ‘Expert Interpretations of Party Space and Party Locations in 42 Societies.’ Party Politics 1: 73-111.

King, Gary and Jonathan Wand. 2007. ‘Comparing incomparable survey responses: Evaluating and selecting anchoring vignettes.’ Political Analysis 15(1): 46-66.

Landman, Todd and Edzia Carvalho. 2010. Measuring Human Rights. London: Routledge.

Laver, Michael and Ben Hunt. 1992. Party and Policy Competition. London: Routledge.

Laver, Michael, Kenneth Benoit, and Nicolas Sauger. 2006. ‘Policy Competition in the 2002 French Legislative and Presidential Elections.’ European Journal of Political Research 45: 667-697.

Lilleker, Darren, Vaclav Stetka and Jens Tenscher. 2015. ‘Towards hypermedia campaigning? Perceptions of new media’s importance for campaigning by party strategists in comparative perspective.’ Information, Communication and Society 18(7): 747-765.

Mair, Peter. 2001. ‘Searching for the position of political actors: A review of approaches and a critical evaluation of expert surveys.’ In Michael Laver (ed.), Estimating the Policy Positions of Political Actors. London: Routledge.

Martinez i Coma, Ferran and Carolien van Ham. 2015. ‘Can experts judge elections? Testing the validity of expert judgments for measuring election integrity.’ European Journal of Political Research 54(2): 305-325.

McElroy, Gail and Kenneth Benoit. 2007. ‘Party groups and policy positions in the European Parliament.’ Party Politics 13: 5-28.

Meyer, Mary and Jane Booker. 1991. Eliciting and Analyzing Expert Judgment: A Practical Guide. London: Academic Press.

O’Malley, Eoin. 2007. ‘The Power of Prime Ministers: Results of an Expert Survey.’ International Political Science Review 28(1): 7-27.

Pemstein, Daniel, Eitan Tzelgov and Yi-ting Wang. 2015. ‘Evaluating and Improving Item Response Theory Models for Cross-National Expert Surveys.’ Varieties of Democracy Institute: Working Paper Series No. 1.

Saiegh, Sebastian. 2009. ‘Recovering a basic space from elite surveys: Evidence from Latin America.’ Legislative Studies Quarterly 34(1):117-145.

Schedler, Andreas. 2012. ‘Judgment and Measurement in Political Science.’ Perspectives on Politics 10(1): 21-36.

Steenbergen, Marco R. and Gary Marks. 2007. ‘Evaluating expert judgments.’ European Journal of Political Research 46: 347-366.

Teorell, Jan, Carl Dahlström and Stefan Dahlberg. 2011. The QoG Expert Survey Dataset. University of Gothenburg: The Quality of Government Institute.

Warwick, Paul. 2005. ‘Do Policy Horizons Structure the Formation of Parliamentary Governments? The Evidence from an Expert Survey.’ American Journal of Political Science 49(2): 373-387.