Large Language Models, Artificial Intelligence and the Future of Law
Session 11: How do we make ethical AI?
1. Moratorium or Cessation of AI Development
This viewpoint advocates halting or severely restricting AI development, particularly the creation of superintelligent AI, because of existential risks. The position often invokes the precautionary principle: in the face of potential risks of this magnitude, the most responsible course of action is to prevent, or significantly slow, development until safety can be assured.
2. Opposition to Special Regulation for AI
This position holds that regulation could slow down AI development, causing regulating jurisdictions to fall behind in technological competitiveness. Proponents argue that existing laws on privacy, discrimination, and consumer protection are adequate to handle the issues raised by AI.
3. Responsible AI Development
This perspective advocates continued development of AI, but with thoughtful regulation and ethical guidelines to ensure beneficial outcomes and mitigate risks. AI developers should build AI systems with certain core principles of responsible AI in mind.
Several international institutions have framed guidelines and guiding principles:
OECD (2019) - OECD AI Principles
European Union (2019) - Ethics Guidelines for Trustworthy AI
IEEE (2019) - Ethically Aligned Design
World Economic Forum (2024) - Artificial Intelligence: Operationalizing Responsible AI
ISO (2023) - Building a responsible AI: How to manage the AI ethics debate
G7 (2023) - G7 AI Principles and Code of Conduct
UNESCO (2021) - Recommendation on the Ethics of Artificial Intelligence
Most countries have also got into the AI regulation game.
But what are the "core" principles?
8 Principles of Responsible AI
1. Privacy
AI systems should be designed to collect, store, and process data in ways that respect user privacy and comply with relevant laws, such as the GDPR in Europe.
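To make this concrete, here is a minimal, illustrative sketch of data minimisation: direct identifiers are pseudonymised before a record reaches the AI system. The field names, salt, and use of salted hashing are assumptions for illustration; this technique alone does not guarantee GDPR compliance.

```python
import hashlib

# Hypothetical list of fields treated as direct identifiers.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymise(record: dict, salt: str = "rotate-per-deployment") -> dict:
    """Replace direct identifiers with salted hashes; keep other fields as-is."""
    cleaned = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
            cleaned[key] = f"pseudo_{digest}"
        else:
            cleaned[key] = value
    return cleaned

applicant = {"name": "A. Sharma", "email": "a@example.com", "income": 54000}
print(pseudonymise(applicant))
# -> {'name': 'pseudo_...', 'email': 'pseudo_...', 'income': 54000}
```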
2. Accountability
Accountability in AI ensures that there are mechanisms in place to hold designers, operators, and deployers of AI systems responsible for the outcomes. This includes clear documentation of decision-making processes and the establishment of legal and ethical responsibility.
In the European Union, the proposed AI Liability Directive and the revised Product Liability Directive provide some guidance on liability rules. One way developers operationalise accountability in practice is by keeping an audit trail of automated decisions, as sketched below.
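A minimal sketch of such a decision audit trail follows: every automated decision is recorded with its inputs, model version, and timestamp so a human reviewer (or a court) can later reconstruct what the system did. The file name, model name, and fields are hypothetical.

```python
import datetime
import json

def log_decision(log_path: str, model_version: str, inputs: dict, decision: str) -> None:
    """Append one decision record to a JSON-lines audit log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: a credit-scoring system records a rejection.
log_decision("decisions.jsonl", "credit-scorer-v1.3",
             {"income": 54000, "loan_amount": 200000}, "rejected")
```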
3. Safety and Security
AI systems must be safe for users and secure from external attacks. This includes physical safety (e.g., robotic systems) and cybersecurity.
4. Transparency and Explainability
AI should be transparent, meaning its operations should be understandable to users and stakeholders. Explainability refers to the ability of AI systems to explain their decisions and actions in human-understandable terms.
For example, in the Indian elections, AI voice clones of politicians were used to talk to voters without telling them that they were speaking with an AI. On the explainability side, the sketch below shows what a model decision accompanied by human-readable reasons might look like.
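This is a minimal sketch, assuming a simple linear scoring model whose weights, features, and threshold are all hypothetical; real systems often need dedicated explanation methods, but the idea of reporting each factor's contribution in plain terms is the same.

```python
# Hypothetical weights of a simple linear credit-scoring model.
WEIGHTS = {"income": 0.4, "existing_debt": -0.7, "years_employed": 0.2}
BIAS = 0.1
THRESHOLD = 0.5

def score_with_explanation(features: dict) -> tuple[float, list[str]]:
    """Return the score and a human-readable list of feature contributions."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    score = BIAS + sum(contributions.values())
    explanation = [f"{k} contributed {c:+.2f} to the score"
                   for k, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))]
    return score, explanation

score, reasons = score_with_explanation(
    {"income": 0.6, "existing_debt": 0.3, "years_employed": 0.5})
print("approved" if score >= THRESHOLD else "rejected")   # rejected
for line in reasons:
    print(" -", line)
```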
5. Fairness and Non-discrimination
AI systems should be designed to avoid unfair bias or discrimination against any individual or group.
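One common way to test for this is a demographic-parity check: compare the rate of favourable outcomes across groups and flag large gaps. The sketch below is illustrative; the data, group labels, and the 80% threshold (the "four-fifths rule" used in US employment practice) are assumptions, and parity is only one of several competing fairness metrics.

```python
def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: list of (group, was_selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, chosen in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if chosen else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([("group_a", True), ("group_a", True), ("group_a", False),
                         ("group_b", True), ("group_b", False), ("group_b", False)])
print(rates)                          # {'group_a': 0.67, 'group_b': 0.33}
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, flag for review
```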
6. Human Control of Technology
This principle advocates that humans should maintain control over AI systems, ensuring that machines do not make autonomous decisions without human oversight.
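A minimal sketch of one way this is operationalised, a human-in-the-loop gate: the system may act on its own for low-stakes cases but must escalate high-stakes decisions to a person. The risk threshold and case structure are hypothetical.

```python
def decide(case: dict, risk_threshold: float = 0.7) -> str:
    """Route high-risk cases to a human; let the system handle the rest."""
    if case["risk_score"] >= risk_threshold:
        return "escalate_to_human_reviewer"
    return "auto_approve" if case["model_recommendation"] == "approve" else "auto_reject"

print(decide({"risk_score": 0.9, "model_recommendation": "approve"}))  # escalated
print(decide({"risk_score": 0.2, "model_recommendation": "approve"}))  # auto_approve
```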
7. Sustainability
Sustainability in AI refers to developing and using AI technologies in a way that considers their environmental impact and promotes ecological health. This includes optimizing energy usage of data centers, using eco-friendly hardware, and designing algorithms that are energy-efficient.
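A back-of-the-envelope sketch of the kind of figure a sustainability assessment might report, the estimated energy and emissions of a training run. All numbers below (GPU count, power draw, data-centre overhead, grid carbon intensity) are assumed for illustration and vary widely in practice.

```python
NUM_GPUS = 8
AVG_POWER_PER_GPU_KW = 0.4       # ~400 W per GPU under load (assumed)
TRAINING_HOURS = 72
PUE = 1.4                        # data-centre overhead: cooling, networking (assumed)
GRID_KG_CO2_PER_KWH = 0.5        # depends on region and energy mix (assumed)

energy_kwh = NUM_GPUS * AVG_POWER_PER_GPU_KW * TRAINING_HOURS * PUE
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH

print(f"Estimated energy: {energy_kwh:.0f} kWh")          # ~323 kWh
print(f"Estimated emissions: {emissions_kg:.0f} kg CO2")   # ~161 kg CO2
```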
8. Promotion of Human Values
AI should promote and not undermine human values such as dignity, rights, freedoms, and cultural diversity.