Identifying the questions and stakeholders of Responsible AI
Photo by Mindaugas Vitkus on Unsplash
8-9 September 2025 - Presenting at UKAIRS in Newcastle
23-25 September 2025 - Presenting at Creative AI Exploration and Imagination: Innovative Methods for Alternative Digital Futures at the University of Leipzig
3-5 December 2025 - FRAIM will be at Fantastic Futures 2025 at the British Library!
17 July 2025 Workshop with Sheffield City Council on Responsible AI in Local Government. Read more
13 February 2025 BRAIDxIDI seminar presentation on FRAIM at the University of Edinburgh. Read more
8 January 2025 New article published in Royal Society Open Science on a competency-based approach to AI Thinking. Read more
17 December 2024 New article published in Science Museum Group Journal about AI and museums. Read more
10 December 2024 Launch of Constant Washing Machine by Blast Theory in Brighton! Read more
Rick Payne and team / Better Images of AI / AI is... / CC-BY 4.0
The Framing Responsible AI Implementation and Management (FRAIM) project is bringing together cross-sector perspectives on organisational Responsible AI (RAI) policy and process to scope key stakeholders, shared values, and actionable research needs for building the evidence base on implementing and managing RAI.
Building on current knowledge and best practices in multi-stakeholder RAI research, we will:
(1) Connect a community of stakeholder organisations within the RAI ecosystem;
(2) Identify key directions for improving organisational RAI policy and process; and
(3) Identify shared values and common questions to guide RAI research and intervention.
We will embed artistic reflection and response to these contributions throughout the project, in collaboration with the Open Data Institute's Data as Culture programme.
FRAIM is funded as part of the BRAID programme by UK Research and Innovation's Arts and Humanities Research Council (award number AH/Z505596/1).
We have partnered with organisations representing key areas in which AI use will significantly impact people’s lives, including local policy, information access, and cultural enrichment.
Our scoping work with these partners will provide a strong foundation for the future development of practices and interventions that enable an ecosystem approach to RAI, and for creatively and critically examining organisational implementation and management of RAI.
The trust that the public place in libraries is hugely important, so Responsible AI is a topic of great interest as libraries decide when and how to use AI-based technologies. The British Library’s strategic plan – Knowledge Matters – shows our commitment to supporting and stimulating research of all kinds. As the national research library, we have a commitment to supporting the active creation of new knowledge. The FRAIM project builds on and amplifies our work with AI on the AHRC-funded Living with Machines project (2018-23), providing opportunities to shape new research directions and design for responsible AI while allowing us to benefit from external expertise and learn from other project partners.
The FRAIM project is also very timely, as we work on our first AI strategy. Taking part in the steering group and interviews will provide valuable opportunities for reflection and offer insights into best practice for implementing and managing Responsible AI.
Eviden wants to continue to be an ethical partner for customers and to influence AI regulation. We recognise that to do this effectively, our teams need to be truly reflective of the socio-political landscape that AI sits in. We champion diversity – in thought, background, experience, and technical ability. We believe that no single body or organisation should be the sole voice for regulatory input and that collaboration between industry, government and academia will be integral to the future of AI. Formal AI regulation needs to be spearheaded through collaborative efforts between government, existing regulatory bodies, industry and academia. We want to contribute to creating a transformative, trustworthy, AI-powered future, which is why we are participating in FRAIM: we see it as an important element in the process of implementing responsible AI.
One of the most widely heralded advancements in technology today is the development of Artificial Intelligence (AI). We know that AI has the potential to bring tremendous opportunities for employees of Sheffield City Council, helping to realise productivity gains and contributing to an improved service for the citizens of Sheffield.
We have always been, and continue to be, aware of our responsibilities in managing the implementation of any new technological advancement. Our vision and approach is to implement the appropriate opportunities presented by AI sensibly and rigorously.
The opportunity to work with specialists at the University of Sheffield in this new area draws on the connections that already exist in the City and brings a high level of expertise to the discussions and actions we will undertake.
At the ODI, we want to help develop an AI data ecosystem grounded in responsible data practices. Data is the feedstock of AI and must be considered at every stage of AI development and deployment, which is why we're so excited to be part of the FRAIM project. Through our Data as Culture programme, we'll embed an artist co-researcher with the team. They'll work with other FRAIM partners, reflecting on and responding to the themes that emerge from the work. This exciting, multi-disciplinary project has the potential to envision imaginative new approaches to responsible AI implementation and we can't wait to see the results.
Contact us at:
d.r.newman-griffis AT sheffield DOT ac DOT uk