Large Language Models, Artificial Intelligence and the Future of Law
Session 17: How do we regulate attempts to change society using AI?
Can you spot fake news?
Source: https://www.politico.eu/article/people-view-ai-disinformation-perception-elections-charts-openai-chatgpt/
Companies' own AI systems may generate fake news themselves:
And here's what happens if you ask Google Home "is Obama planning a coup?" pic.twitter.com/MzmZqGOOal
— Rory Cellan-Jones (@ruskin147) March 5, 2017
What does the law say?
United States
No federal law yet, but several bills have been proposed:
No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act: would make it illegal to create a digital depiction of a person without their permission.
DEEPFAKES Accountability Act: Introduced in the House of Representatives in September 2023, this bill aims to protect national security and provide legal recourse to victims of harmful deepfakes.
Preventing Deepfakes of Intimate Images Act: Introduced by Rep. Joe Morelle in May 2023, this bill would criminalize the non-consensual sharing of intimate deepfake images.
Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act: This bill would allow victims to sue where the creator of a deepfake knew, or recklessly disregarded, that the victim did not consent to its creation.
Lawmakers have also called on the Department of Homeland Security (DHS) to establish a task force to address digital content forgeries, also known as "deepfakes." Many states have enacted their own legislation to combat deepfakes.
European Union
The European Union's Digital Services Act (DSA) imposes labelling obligations on social media platforms, enhancing transparency and helping users determine the authenticity of media.
Transparency Reporting (Article 24): Platforms must provide clear and comprehensive information about their content moderation policies, including how they address deepfakes. This transparency allows for public scrutiny and accountability.
Enhanced Content Moderation (Article 16): Platforms must implement "notice and action" mechanisms that allow users to flag illegal content, including deepfakes, and must act expeditiously on such notices.
Cooperation with Trusted Flaggers (Article 22): Platforms must establish channels for cooperation with "trusted flaggers," who have expertise in identifying and reporting illegal content, including deepfakes.
Many national governments impose additional requirements. Germany's Network Enforcement Act (NetzDG), passed in 2017, aims to combat hate speech, fake news, and other illegal content online. It obliges social media platforms with more than 2 million users to remove "clearly illegal" content within 24 hours of a report and all other illegal content within 7 days, or face hefty fines. Deepfakes that violate existing laws, such as those involving defamation, harassment, or incitement to violence, can fall within its scope.
United Kingdom
Criminal Justice Bill (amendment): In April 2024, the UK government announced a new offence, introduced through an amendment to this bill, making it illegal to create a sexually explicit deepfake image or video of someone without their consent. The offence applies even where there is no intent to share the image, if it was created to cause alarm, humiliation, or distress to the victim.
Online Safety Act 2023: This Act tackles a wider range of online harms, including the sharing of deepfakes. It criminalizes the non-consensual sharing of manufactured intimate images (deepfakes) and strengthens existing offences related to the sharing of intimate images.
India
Information Technology Act, 2000 (IT Act):
Section 66D: Punishes cheating by personation using computer resources. If a deepfake is used to impersonate someone for fraudulent purposes, this section can be applied.
Section 67: Prohibits publishing or transmitting obscene material in electronic form.
Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
In particular, Rule 3(1)(b)(v), which addresses content that "knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature".
Government Advisory: In November 2023, the Indian government issued an advisory to social media platforms to identify and act against deepfakes and other harmful content.
Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023
Fact Check Unit (FCU): The government established a unit authorized to identify fake or misleading information relating to the business of the Central Government. Content flagged by the FCU as fake would have to be taken down by intermediaries.
Due Diligence: Intermediaries were mandated to make reasonable efforts to ensure users do not host, display, upload, modify, publish, transmit, store, update, or share information deemed fake or false by the FCU.
Kunal Kamra v. Union of India (2024)
A division bench of the Bombay High Court, comprising Justice Gautam Patel and Justice Neela Gokhale, delivered a split verdict on the matter:
Justice Gautam Patel held that the amendments were unconstitutional because they violated the fundamental right to freedom of speech and expression, and he expressed concern about the potential misuse of the FCU for censorship and the suppression of legitimate criticism and satire.
Justice Neela Gokhale upheld the validity of the amendments, holding that they were within the government's legislative competence, and viewed the FCU as a necessary tool to combat fake news and misinformation that could harm the public interest.
Due to the split verdict, the matter was referred to a third judge, Justice A.S. Chandurkar, to break the deadlock. In September 2024, Justice Chandurkar sided with Justice Patel and struck down the amendment as unconstitutional.
MeitY is considering amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) to explicitly define deepfakes and make it obligatory for all intermediaries to make "reasonable efforts" not to host them.
See also Rajat Sharma v. Union of India (DHC, 2024)
Mass-level psychological and intellectual manipulation is also perpetrated: social networks have evolved into platforms for generating and propagating fake news on a huge scale, which in turn empowers disruptive voices and ideologies with cascading effects.
Twitter Inc. v. Union of India (Kar HC, 2023)
Price Fixing and Manipulation: In its antitrust lawsuit against Amazon, the Federal Trade Commission (FTC) alleged that the company used a secret algorithm to test how far it could raise prices in ways competitors would follow, bringing the company $1 billion in revenue (see the sketch after this list).
Identity Fusion and Social Media Bubbles: In a lawsuit, a user challenged Meta's control over user experiences on Facebook, particularly its News Feed algorithm and its restrictions on third-party tools.
Persuasion and Subliminal Messaging: The Cambridge Analytica scandal involved the unauthorized harvesting of personal data from millions of Facebook users. A researcher, Aleksandr Kogan, created an app called "This Is Your Digital Life" that collected data from users who took a personality quiz. However, the app also harvested data from the users' friends without their explicit consent. This data was then passed to Cambridge Analytica, a political consulting firm, which used it to build psychological profiles of voters and target them with personalized political advertisements during the 2016 US presidential election and the Brexit referendum.
Banning, Shadow Banning and Promotion: In a lawsuit, the plaintiff had multiple Facebook accounts. He posted from one account, but the post did not appear when he looked for it from a different account. He claimed that "Facebook was trying to deceive him into thinking he had posted a publicly visible comment, when in fact the comment was not visible." Rodrigo de Souza Millan v. Facebook, Inc. (2021)
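The pricing-algorithm example above describes, in essence, algorithmic price leadership: raise a price, watch whether rivals follow, and keep the increase only if they do. The short Python sketch below is purely illustrative of that mechanism and is not the algorithm at issue in the FTC complaint; the function names (competitors_follow, probe_price) and the 50% follow rate are hypothetical.

# Illustrative sketch of algorithmic price leadership, under assumed names and
# parameters; NOT the algorithm described in the FTC complaint.
import random

def competitors_follow(new_price: float) -> bool:
    # Hypothetical stand-in for observing whether rivals match a price change.
    return random.random() < 0.5

def probe_price(current_price: float, step: float = 0.05, rounds: int = 10) -> float:
    # Repeatedly propose a small increase; keep it only if rivals follow.
    price = current_price
    for _ in range(rounds):
        candidate = round(price * (1 + step), 2)
        if competitors_follow(candidate):
            price = candidate   # rivals matched: the higher price sticks
        # otherwise revert (keep the old price) and try again in a later round
    return price

print(f"Price after probing: {probe_price(20.00):.2f}")

Run repeatedly, the loop ratchets prices upward only when the simulated rivals follow, which is the competitive concern the lawsuit describes.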