Digital Wildfire: (Mis)information flows, propagation and responsible governance

Research project November 2014 - November 2016

Research Councils UK “Global Uncertainties” programme

See our News and Events page for regular project updates



Project overview

Digital Wildfire (Nov 2014 to Nov 2016) is an ESRC funded project that investigates the spread of harmful content on social media and identifies opportunities for the responsible governance of digital social spaces. As a collaborative team of computer scientists, social scientists and ethicists, we investigate the impacts that content such as rumour, hate speech and malicious campaigns can have on individuals, groups and communities, and examine social media data to identify forms of ‘self-governance’ through which social media users can manage their own and others’ online behaviours. We also draw on the perspectives of other key players such as social media companies, legislators, the police, civil liberties groups and educators to explore ways in which the spread of harmful social media content might be prevented, limited or managed.


Digital Wildfire is an interdisciplinary collaboration between the Universities of Oxford, Warwick, Cardiff and De Montfort. Read more about the project background and work packages here.


The project aims to advance the responsible governance of social media. In addition to reporting our findings to academic audiences - see some of our papers here and here - we have written articles for The Conversation and EmergencyJournalism.net. In January 2016 we held a showcase workshop in which we presented some of our project findings and invited a series of speakers to explore issues relating to the spread of harmful content on social media and the responsible governance of digital social spaces. Our artist-in-residence Barbara Gorayska has produced two paintings designed to promote a creative understanding of digital wildfires to broad audiences. We have worked closely with schools and young people to find ways to increase digital maturity and resilience among young social media users. We have run two youth panels, created two sets of teaching and learning materials and produced two video animations, #TakeCareOfYourDigitalSelf and Keeping Social Media Social.


Key findings

The project is now in its reporting phase and we are happy to announce our major findings. You can view them in this slide show and read them below.


Our key conclusions relate to:

 

1)    The scale and breadth of the ‘problem’ of harmful content spreading on social media

Through our interviews, observations and surveys we have found that a very wide range of agencies are now having to deal with rapidly spreading social media content that is in some way inflammatory, antagonistic or provocative. This includes the police, councils, news agencies, anti-harassment organisations, anti-bullying groups and schools.

2)    The complexities and limitations of current governance

Various mechanisms currently exist to deal with social media content and/or its impact but these tend to have practical limitations. For instance, the law and governance mechanisms enacted by social media platforms (removing posts, suspending accounts etc.) are mostly retrospective – dealing with content after it has already spread and caused harm. They also tend to act on individual posts or users, rather than the multiple posts and users associated with a digital wildfire.

 

3)    The potential value of counter speech and user self-governance

In contrast to other governance mechanisms, we find that user self-governance has some capacity to be prospective and limit the spread of harmful content in real time. The posting of counter speech to disagree with an inflammatory comment or unsubstantiated rumour can serve to encourage others to reflect carefully before sharing or forwarding content. It also upholds rather than undermines freedom of speech. Our analysis of social media content (involving qualitative and computational approaches) suggests that multiple voices of disagreement in a Twitter conversation can function to quell hate speech. Click here for further information.



4)    The value of education and engagement

When we ask respondents to tell us what they feel are appropriate ways forward for the responsible governance of social media, they frequently emphasise the idea of communities working together and the value of fostering responsibility on social media through education.

Recognising the value of education, we have focused much of our project impact activities on engaging with and providing resources for educators and young people. We have run two youth panel competitions and produced two sets of educational materials for secondary schools – focusing on e-safety and digital citizenship. We have also co-produced two video animations - #TakeCareOfYourDigitalSelf and Keeping Social Media Social.

 

Further questions and next steps

The next step for our project is the development of a ‘toolkit’ that will help users to navigate through social media policy and practice. This will be available online and more information will be posted here when it is ready. 

We are also continuing to look into certain methodological questions raised by the project. These questions challenge us to consider how we can analyse social media content in ways that are both accurate and ethical. 

The contributions of our project will continue through a number of activities. These include an impact acceleration project being led by team member Rob Procter at the University of Warwick and Marina Jirotka’s work as specialist advisor to the House of Lords Select Committee on Communications for the Inquiry into Children and the Internet.

 




For more information about our project activities and findings, please contact helena.webb@cs.ox.ac.uk


Full updates about the project are on our News and Events page. You can also follow us on Twitter @EthicsWildfire and watch a variety of project videos on our YouTube channel.







