at Julius Glickman Conference Center
Since the first industrial revolution, the workplace has been a highly regulated and governed area of activity. From the early development of health and safety law to rules on working time, the relationship between workers, their employers and their fellow employees has been an important site of intervention, with both regulatory norms and regulatory agencies created in this area. The correct way to regulate the field has long been debated: governments and courts initially took a laissez-faire attitude, leaving regulation to the common law of contract, before adopting a more interventionist approach once it became clear that risks to human health and safety were not being managed appropriately within businesses. Regulation has included restrictions on child labour, the introduction of health and safety measures such as requirements to provide PPE and machine guards, and provisions on working time.
As static robots were introduced onto production lines, they were required to be guarded like any other tool. But with the development of robotics, and the embodiment of artificial intelligence in robots designed to collaborate with humans, the old models of robot regulation are outdated. New regulatory frameworks, such as the EU's new Machinery Regulation and the Artificial Intelligence Act, have sought to create a regulatory regime appropriate to the changed position of robots in society and the workplace, but have they gone far enough? Human-robot collaboration has the potential to make a huge contribution to the future economy, enabling manufacturing processes that bring together the best of both humans and robots. But for this future to be realised, the appropriate regulatory framework must be in place, so that workers both feel and are safe, and businesses have the confidence to introduce collaborative robots into their workplaces.
What are the challenges for regulating collaborative robots? What should a regulatory regime that both enables human-robot collaboration and ensures safety and security look like, in terms of both norms and institutions? Should both robot behaviour and data be regulated by specialised instruments or are general instruments sufficient? Should certain types of human-robot collaboration be mandated (for example where the risk to human workers is high) or banned? Who should be responsible for workplace injuries caused by collaborative robots? All of these questions are challenges for policymakers, and may be explored in this workshop.
Richard is Professor of Law, Regulation and Governance at the University of Nottingham. His research focuses on food law, law and technology, and consumer law. He is a Co-Investigator in the UKRI-funded Trustworthy Autonomous Systems Hub and is currently researching the ways that technology can improve regulation.
Natalie is an Assistant Professor in Law and Autonomous Systems. Her research focuses on understanding the adoption challenges and implications of emerging technologies, especially Artificial Intelligence (AI) and robotics, in order to responsibly develop innovative strategies to improve the implementation of technology into organizations and society.