Large Language Models, Artificial Intelligence and the Future of Law
Session 14: If AI causes harm, who is responsible?
IndiaAI 2023, Expert Group to MeitY, https://indiaai.s3.ap-south-1.amazonaws.com/docs/IndiaAI+Expert+Group+Report-First+Edition.pdf
AIFORALL - Approach Document for India Part 1 – Principles for Responsible AI
MeitY's March 2024 Advisory on AI
But all of this only addresses whether and how AI should be regulated at the level of development.
What about who is liable when an AI system actually causes harm?
Yet Honoré (1999) suggests that fault is different from responsibility, and that society may place undue burdens of care.
Under the AI Act:
Disclosure of AI Use: Informing individuals when an AI system is being used in decision-making processes that impact them.
Explanation of Decision-making Process: Providing information on the logic involved, as well as the significance and consequences of the AI-driven decision.
Explanation Quality: Ensuring that explanations are comprehensible to the intended audience, using clear and accessible language.
Yet "black box" AI systems are those whose decision-making process is not easily understandable by humans, often due to the complexity of the underlying algorithms and the large amounts of data they process. For such systems, achieving explainability can be challenging.
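As a purely illustrative sketch (the loan-approval scenario, feature names and data below are hypothetical, and nothing here is prescribed by the AI Act), one common way to approximate explainability for an opaque model is to fit an interpretable surrogate to the model's own predictions and report the surrogate's weights in plain language:

# Hypothetical sketch: explaining a decision from an opaque ("black box") model
# by training an interpretable surrogate to mimic its predictions.
# The data, feature names and "credit approval" framing are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "existing_debt", "years_employed", "missed_payments"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

# The "black box": accurate, but its internal logic is hard to convey to a lay person.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Interpretable surrogate trained on the black box's own outputs.
surrogate = LogisticRegression().fit(X, black_box.predict(X))

applicant = X[:1]  # one loan application
decision = "approved" if black_box.predict(applicant)[0] == 1 else "declined"
print(f"AI decision: application {decision}")
print("Factors that most influenced decisions of this kind (surrogate weights):")
for name, weight in sorted(zip(feature_names, surrogate.coef_[0]),
                           key=lambda pair: -abs(pair[1])):
    print(f"  {name}: {weight:+.2f}")

A surrogate of this kind only approximates the black box's behaviour, which is precisely why "explanation quality" is hard to guarantee for genuinely opaque systems.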
The AI Act obliges providers of general-purpose AI models to disclose certain information to downstream system providers. Model providers additionally need to have policies in place to ensure that they respect copyright law when training their models.
General purpose AI models that were trained using a total computing power of more than 10^25 FLOPs are considered to carry systemic risks, given that models trained with larger compute tend to be more powerful. The AI Office (established within the Commission) may update this threshold in light of technological advances, and may furthermore in specific cases designate other models as such based on further criteria (e.g. number of users, or the degree of autonomy of the model). Providers of models with systemic risks are therefore mandated to assess and mitigate risks, report serious incidents, conduct state-of-the-art tests and model evaluations, ensure cybersecurity and provide information on the energy consumption of their models.
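As a back-of-the-envelope illustration of the 10^25 FLOP threshold, the sketch below applies the common 6 × parameters × training-tokens approximation of training compute to a hypothetical model; the heuristic and the figures are assumptions, not anything prescribed by the AI Act:

# Rough check of whether a hypothetical general-purpose model would be presumed
# to carry systemic risk under the AI Act's 10^25 FLOP training-compute threshold.
# The 6 * N * D rule of thumb (compute ~ 6 x parameters x training tokens) is a
# common approximation, not something the AI Act itself prescribes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute with the 6*N*D heuristic."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Presumed systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")

On this estimate the hypothetical 70-billion-parameter model stays just below the threshold; roughly doubling its parameter count or training data would push it past 10^25 FLOPs and trigger the systemic-risk obligations described above.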
Some Examples of Liability Issues
1. Autonomous Weapons and Identification Systems
Autonomous weapons systems (AWS) can make decisions to engage targets without human intervention. This raises significant ethical and legal concerns about accountability in cases of unlawful killings or war crimes.
Current Laws
International humanitarian law (IHL) governs the use of force in armed conflicts but was not designed to address the challenges posed by autonomous systems. Current treaties do not specifically regulate AWS. However, customary international humanitarian law principles include:
The principle of distinction is a cornerstone of IHL, requiring parties to a conflict to distinguish at all times between combatants and non-combatants. Autonomous robots must be capable of distinguishing between military targets and protected civilian objects or personnel to comply with this principle.
The principle of proportionality prohibits attacks that may cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated. The application of this principle to autonomous robots is challenging, as it requires complex decision-making about the value of military objectives versus potential civilian harm.
Command Responsibility
Command responsibility is a legal doctrine in international law that holds commanding officers and superior officers responsible for war crimes committed by their subordinates. It is also known as superior responsibility. The first legal implementations of command responsibility are found in the Hague Conventions IV and X (1907).
2. Self-Driving Cars
Self-driving cars promise to revolutionize transportation but introduce significant legal challenges, particularly in accidents involving human injuries. Determining liability when human oversight is minimal is a complex issue.
Current Laws
In countries like the USA, regulations for self-driving cars are still in development. Some states have enacted laws to govern the testing and deployment of autonomous vehicles, but comprehensive federal legislation is lacking. The UK, by contrast, has legislated directly on the question of liability:
(1) Where—
(a) an accident is caused by an automated vehicle when driving itself on a road or other public place in Great Britain,
(b) the vehicle is insured at the time of the accident, and
(c) an insured person or any other person suffers damage as a result of the accident,
the insurer is liable for that damage.
(2) Where—
(a) an accident is caused by an automated vehicle when driving itself on a road or other public place in Great Britain,
(b) the vehicle is not insured at the time of the accident,
(c) section 143 of the Road Traffic Act 1988 (users of motor vehicles to be insured or secured against third-party risks) does not apply to the vehicle at that time—
(i) because of section 144(2) of that Act (exemption for public bodies etc), or
(ii) because the vehicle is in the public service of the Crown, and
(d) a person suffers damage as a result of the accident,
the owner of the vehicle is liable for that damage.
— Section 2, Automated and Electric Vehicles Act 2018 (UK)
3. Social Media Algorithms and Content Generation
Social media algorithms that curate and recommend content can inadvertently promote misinformation, hate speech, or harmful content, significantly influencing public opinion and behaviour. Further, several AI systems can auto-generate social media content.
Current Laws
Laws such as Section 230 of the Communications Decency Act in the USA provide broad immunity to platforms for content posted by third parties. In India, Section 79 of the IT Act provides that intermediaries are not liable for third-party data, information, or communication links hosted by them, provided that they observe ‘due diligence while discharging [their] duties’ under the Act – the relevant due diligence obligations are set out in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
4. Medical Diagnosis
AI applications in healthcare, such as diagnostic tools or treatment recommendation systems, can significantly impact patient outcomes. Errors or biases in these systems can lead to incorrect diagnoses or treatment plans. From 2015 through 2020 alone, the European Union approved 224 medical AI tools, and the US FDA has approved a total of 692 AI-enabled medical devices. As of October 19, 2023, no device had been authorized that uses generative AI or artificial general intelligence (AGI) or is powered by large language models.
Current Laws
Healthcare regulations like HIPAA in the USA govern patient data privacy but do not directly address AI's role in clinical decisions. Medical malpractice laws apply, but attributing negligence to a specific entity can be complex. Difficult liability cases are already emerging in the insurance industry.
Are there any general rules on AI liability?
In the EU, the AI Liability Directive (AILD) and the revised Product Liability Directive (PLD) provide some guidance on liability rules.
Under the PLD:
AI system developers are strictly (no-fault) liable if a defect in their AI system causes harm to a person or damage to data.
Under the AILD:
A rebuttable presumption of a causal link between the defendant's fault (usually the AI provider's) and the act or omission of the AI system that gave rise to the damage.
For AI systems that are not high-risk (non-HRAIS), the rebuttable presumption applies only where it is excessively difficult for the claimant to prove the causal link.
Where the damages claim is against a provider of a high-risk AI system (HRAIS), a breach of duty is established only if, taking into account the HRAIS's risk management system, the provider breached their obligations under the AI Act.