Picture credit: Pedro Sanz, Unsplash
Autonomous Systems (AS) are increasingly permeating all aspects of society, and we need to be confident in the safety of AS decision-making. We need not expect AS to make decisions in the same manner as humans, but if we are to have confidence in the safety of AS decision-making, there are attributes of human decision-making which we can reasonably expect AS to emulate. Similarly, there are aspects of human decision-making which we should not require AS to emulate, and there are aspects of decision-making in which we can be confident that AS are superior to their human counterparts.
In this talk, I’ll present the first part of our research into the safety of AS decision-making by considering a hypothetical decision-making scenario with which we compare and contrast the mechanics of human and machine decision-making. I’ll present the Puddle Problem, and consider the data, information, and knowledge required for safe AS decision-making.
Matt is a Visiting Fellow at the Centre for Assuring Autonomy at the University of York, where he is currently researching software safety assurance in AI, and the safety of decision-making in autonomy. Matt is also an independent consultant in assuring the safety of complex, socio-technical systems.
Matt’s PhD focussed on the impediments to recognised good practice in software safety assurance, with the aim of identifying, characterising, and eradicating the barriers to the adoption of best practice in this area.