Fill in and submit the form below to force your social network to show you what it has on you, and demand what is yours.
You can read the entire argument below.
Exercise your right of access to what your network says about you.
Your profile belongs to you!
I. SUMMARY
In this article we argue that the artificial intelligence used in social media is essentially a voluntary and intentional human action with regard to its effects on users and that, therefore, the requirements of civil and criminal liability remain fully applicable to it, even if adaptations to its new features are needed.
But because, to this day, the qualifications that social media platforms attach to their users and their intentions to induce behaviors remain mostly secret, it is still difficult to apply the law to the AI that runs on social media.
This article advocates that the users of social media platforms, as holders of personal data, should exercise their right of access to the logic involved in the automatic processing of their data and the consequences associated with the qualifications that the networks make on their person.
Once users gain access to the algorithmic outputs of their own profiling, the door opens to holding platforms accountable and to efficient and effective regulation.
II. THE QUESTION
Social media have brought new ways of life to societies. They brought people together, launched artists, and fostered knowledge, innovation, and science. They unleashed millions of new businesses and entrepreneurs who built themselves within this ecosystem.
But nefarious and tragic events are also attributed to social media: the massacre in Myanmar fueled by incitement to racial hatred; the manipulation of electoral processes; disinformation campaigns and divisiveness; the stimulation of addiction and of extreme and self-destructive behaviors, including suicide and self-harm among young adolescents and preadolescents; and even deaths from participation in dangerous "challenges" propagated on the networks.
These events have gone unpunished and, even today, occur essentially free of corrective or punitive measures.
Social media platforms allege in their defense that they are neither responsible nor liable for the content their users receive in their personal accounts, because that content is not produced by the platform but mostly by the users themselves. It is also the users, they argue, who 'choose' the content they see, because through their online behavior they attract the content that matches the preferences they express.
According to this perspective, the companies that own the platforms would be oblivious to the nefarious content that runs on them. And, being thus detached, they could not be held responsible for the behaviors or effects triggered by the content they distribute, even if, in other circumstances, the distribution of such content and the behavioral effects it induces would be, in various ways, illegal, criminal, or simply morally intolerable.
This is how platforms have lived for the better part of the last two decades: in a 'no-man's land' from the point of view of the laws protecting people's rights.
This is, not surprisingly, the corporate narrative of the digital industry. But it is a fallacy, because willful, intentional human action is at the heart of social media artificial intelligence.
III. VOLUNTARY AND INTENTIONAL HUMAN ACTION AT THE HEART OF SOCIAL MEDIA ARTIFICIAL INTELLIGENCE
Introduction
Computer algorithms running on powerful machines process zillions of data points that unveil people's character, preferences, emotional states, desires, fears, and even neuroses, unraveling the most intimate layers of the self in order to define the baits, or hooks, that most effectively capture attention, so as to sell more advertising and generate higher purchase conversion rates. Or, in another segment, the baits that most effectively manipulate electoral, social, or political behavior, depending on the objective of the client to whom the data is sold.
It is the Hook Model on which this industry rests.
What Facebook's algorithm says about us
The Netflix documentary film The Social Dilemma (https://www.thesocialdilemma.com/) shows the level of detail with which Facebook analyzes people.
Facebook labels people according to various physical and socioeconomic attributes. In the example of the documentary, we can see the following tags: male, cisgender, single, searching, below average height, wealthy, athletic.
It also identifies this person's fears, namely social isolation, rejection, snakes, and public speaking, and his life goals: to succeed academically, to have a relationship, and to be in good physical shape.
The platform also draws a personality profile using the OCEAN model, which analyzes the person in five categories corresponding to each letter of the acronym: openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism.
The granularity of the analysis goes to the point of knowing and recording the person's actual emotional state or mood: whether the person is feeling lonely, nervous, bored, focused, excited, tired, angry, different from others, or asleep.
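To make concrete what such a profiling record might look like as a data structure, here is a purely illustrative sketch in Python. Every field name and value is an assumption reconstructed from the documentary's example; it is not Facebook's actual internal schema.

```python
from dataclasses import dataclass

# Purely illustrative: every field name and value below is an assumption
# reconstructed from the documentary's example, not Facebook's actual schema.
@dataclass
class ProfileRecord:
    attributes: list[str]    # physical and socioeconomic labels
    fears: list[str]         # inferred anxieties
    life_goals: list[str]    # inferred aspirations
    ocean: dict[str, float]  # Big Five (OCEAN) scores, e.g. 0.0 to 1.0
    current_mood: str        # inferred moment-to-moment emotional state

user_profile = ProfileRecord(
    attributes=["male", "cisgender", "single", "searching",
                "below average height", "wealthy", "athletic"],
    fears=["social isolation", "rejection", "snakes", "public speaking"],
    life_goals=["succeed academically", "have a relationship",
                "be in good physical shape"],
    ocean={"openness": 0.7, "conscientiousness": 0.4, "extroversion": 0.6,
           "agreeableness": 0.5, "neuroticism": 0.8},
    current_mood="lonely",
)
```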
Like many others, Facebook tracks people across the Internet, recording not only their posts and interactions on its network, but also their search activity, the websites they view, the frequency of search keywords, and their activity in other apps, using Internet trackers.
To conjecture what lessons Facebook might draw from the 'below average height' label, we consulted a 1991 scientific paper entitled Psychological Impact of Significantly Short Stature, by P.T. SIEGEL, R. CLOPPER, and B. STABLER. According to the authors, "The problems associated with short stature include prejudices about being tall, poor performance in competition with siblings and peers, failure to acquire developmental skills due to juvenilization and difficulty in dealing with the physical environment."
We do not know what extrapolations Facebook draws from the 'short person' label, but we would like to know how it uses this information to its advantage and to that of its business customers, commercial or political advertisers, for what purposes, and with what results.
The volitional and intentional element at the core of the computer algorithm
An algorithm is a finite sequence of executable actions aimed at solving a given problem, much like a cooking recipe. These actions are defined by human instruction, consume inputs, and produce outputs in pursuit of a predetermined goal.
The components of algorithmic intelligence built by developers (programmers) include notation, the selection of data sets, labeling, vectors, instructions, word embeddings, and the definition of success criteria. Success consists in achieving the objectives pre-defined for the computation.
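To make the recipe metaphor concrete, here is a minimal, purely illustrative Python sketch of a content-ranking algorithm: inputs (candidate posts and a user profile), a human-written correlation rule, and a pre-defined success criterion. The field names, weights, and scoring rule are assumptions for illustration, not any platform's actual code.

```python
# A minimal, purely illustrative sketch of the "recipe" idea: inputs,
# human-written instructions, and a pre-defined success criterion.
# All field names, weights, and scoring rules are hypothetical.

def rank_posts(posts, profile):
    """Order candidate posts by predicted engagement for one user."""
    def score(post):
        # Correlation rule written by a human developer: content matching
        # the user's assigned labels is predicted to hold attention longer.
        overlap = len(set(post["tags"]) & set(profile["labels"]))
        return 2.0 * overlap + post["base_popularity"]  # hypothetical weights
    # Success criterion, also chosen in advance by a human:
    # show first whatever is predicted to maximize time on screen.
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "tags": ["fitness"], "base_popularity": 0.3},
    {"id": 2, "tags": ["rejection", "self-image"], "base_popularity": 0.1},
]
profile = {"labels": ["athletic", "rejection"]}
print([p["id"] for p in rank_posts(posts, profile)])
# [2, 1]: the post matching a labeled fear outranks the more popular one
```

Every step in this sketch, from the choice of weights to the definition of success, is a human decision, which is precisely the article's point.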
The construction by which a human being establishes rules of correlation between the elements x and y in such a way that the result z is produced, is a voluntary human act like any other, as when a pharmaceutical chemist manipulates molecules to produce an active ingredient.
The collection of user data (input), the establishment of correlation rules between such data and physical, psychological, and social characteristics, the selection of the content that, given those characteristics, will keep the person glued to the screen, and the triggering of actions on the person's part together constitute a human act that is voluntary and intentional as to its effects.
Calling this action, or this intelligence, artificial should not distract us from the fact that it is the product of human action: the content that reaches the user is the product of the deliberate choices of whoever owns the content-selection process, not a product of chance.
An industry of addiction and manipulation of the mind by design
The content served to the user is not purely random, because randomness would not maximize advertising revenue: it is selected to maximize the time the user's attention stays attached to the screen, so that, in that state, the company can launch an instant auction among advertisers and award the advertising message to the highest bidder.
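The mechanics described above can be sketched in a few lines. The following Python fragment is a minimal, hypothetical illustration of an instant auction that awards the impression to the highest bidder; real ad exchanges use more elaborate mechanisms (for example, variants of second-price auctions), and all names and numbers here are invented.

```python
# Hypothetical illustration of the instant auction described above: while the
# user's attention is held, advertisers bid for the impression and the highest
# bidder wins. Real ad exchanges use more elaborate mechanisms; all names and
# numbers here are invented.

def run_instant_auction(bids):
    """bids: mapping of advertiser -> amount offered for this impression."""
    winner = max(bids, key=bids.get)  # highest bidder takes the slot
    return winner, bids[winner]

bids = {"advertiser_a": 0.42, "advertiser_b": 0.57, "advertiser_c": 0.31}
winner, price = run_instant_auction(bids)
print(f"{winner} wins the impression at {price}")  # advertiser_b wins at 0.57
```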
This extractive industry of the mind penetrates deep into the core of the individual with powerful computational machines equipped with algorithmic intelligence that strip and watch people, capturing their attention and making them addicted to the dopamine shots of each positive reinforcement delivered by a new like or a new follower.
It is the attention economy of surveillance capitalism.
The Masters of Silicon Valley
In Stanford's Persuasive Technology Lab course, students learn how to take everything that is known about human psychology and apply it to technology to induce behaviors. They learn to dig deep into the user's brain stem and plant in the mind the hooks that create the unconscious habit of staying on the screen. This course is a must among employees of Silicon Valley Internet companies.
Engineers, mathematicians, and computer scientists team up in growth hacking departments whose job description is to hack people's minds so as to get more users, more sign-ups, and more engagement time on the network. They develop all sorts of positive intermittent reinforcement techniques (receiving a like is one example). These techniques program a person at a deep psychic level, without her realizing it, by injecting the bursts of dopamine on which she then becomes dependent. A minimal sketch of such a reinforcement schedule follows.
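The sketch below illustrates, in purely hypothetical Python, a variable-ratio reinforcement schedule of the kind described: rewards (likes) are withheld and released at unpredictable times and in unpredictable amounts. The batching behavior and the probabilities are assumptions for illustration, not any platform's documented practice.

```python
import random

# A hypothetical sketch of a variable-ratio (intermittent) reinforcement
# schedule: rewards are withheld and released at unpredictable times and in
# unpredictable amounts. The batching behavior and probabilities are
# assumptions for illustration, not any platform's documented practice.

def maybe_release_likes(pending_likes: int) -> int:
    """Release held-back likes unpredictably instead of delivering each at once."""
    if pending_likes > 0 and random.random() < 0.3:  # hypothetical release rate
        return random.randint(1, pending_likes)      # unpredictable reward size
    return 0                                         # no reward this time

# Simulate a few app opens: the user never knows when the next reward arrives,
# which is precisely what makes the loop habit-forming.
pending = 7
for session in range(5):
    delivered = maybe_release_likes(pending)
    pending -= delivered
    print(f"session {session}: {delivered} likes delivered")
```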
Therefore, in a technical-legal sense, there is no doubt that voluntary action is at the heart of the artificial intelligence operated by algorithms in social networks (without prejudice to discussing questions of attribution of liability, such as the links of risk and the predictability of the results of the conduct).
IV. THE SCRUTINY OF ILLICITNESS AND GUILT
Where are the limits?
All is well when everything goes well. And when it does not?
Tristan Harris, a former design ethicist at Google who is now one of the most prominent digital human rights activists at the Center for Humane Technology, which he co-founded, regretful of the harm being done to children, once said: "In this industry we dive the deepest possible into the brain stem and take ownership of the children's sense of self-worth and identity."
For there to be subjective liability (liability that requires guilt as a condition for the agent to be subject to an obligation to compensate or to some form of punishment), there must be an unlawful, intentional, harmful, and voluntary act that is proven to be causal in producing the damage.
Unlawfulness in the programming of algorithms
Unlawful conduct, consisting in the injury of legally protected values or interests, can occur at various 'moments' of this process, starting with the extraction of the inputs: is the collection lawful, and are the consents obtained for the collection of the data extracted by the networks, in all their immense vastness, depth, and intimacy, valid?
Unlawfulness can also be analyzed in the processing of the data: with what attributes is the user labeled, following what logic and objectives? What techniques from the science of psychology are used to pry into the person's identity and exert behavioral manipulation, and with what results? What behaviors is the platform inducing? And what results of behavioral manipulation are envisaged by the third parties to whom the platform sells the users' data points and to whom it gives access to those users?
Along this journey, are the platform's actions and their direct or indirect results permitted by law, or are they prohibited?
Do they violate rights? Are they abusive, or carried out in bad faith? Or are they lawful, legitimate, and carried out in good faith?
Are they within the limits of a right and of its lawful exercise? Do they respect the boundaries of good faith in the execution of contracts, or do they exceed them?
And are they compatible with consumer protection laws, with child protection legislation, with advertising legislation, with media law, and with the criminal code (which provides for certain crimes of incitement)?
Intent or negligence in the development and implementation of algorithms of social networks
The debate around guilt asks whether the action or its outcome was desired by the agent.
There are several modalities of imputation of the act or its result to the will of the agent.
Is the will of the developer consciously or deliberately directed to the results of the action (direct intent)?
Another hypothesis: the agent has recognized in his mind the possibility of a given result of his action, having foreseen it as a possible consequence of the operation of the algorithm, and, although he does not directly want it, he accepts the possibility of its occurrence and does not refrain from the action that can lead to its consummation (dolus eventualis). Or the agent may admit a certain result as a necessary or inescapable consequence of his act and, although it is not his intended result, resign himself to its inevitability and decide to act nevertheless.
For instance, it will be interesting to discuss in this light whether offering beautification filters to minors is capable of triggering the civil or criminal liability of companies for the development of so-called "Snapchat dysmorphia": people (mainly young girls) who develop a pathological need to seek facial surgery in order to look the way they do with beautification ('beautify-me') filters.
Even if we admit that the platform did not directly wish for the girls to suffer from this dysmorphia, it is difficult not to blame it for this consequence, whether with intent or with negligence (at the very least, gross negligence), because a person of normal diligence in the concrete position of the agent, who probably even studied persuasive technology at Stanford, could not reasonably be unaware of such a disorder as a possible consequence, yet did nothing to prevent this foreseeable risk from materializing.
V. ACTING: TAKING OWNERSHIP OF OUR ALGORITHMIC PORTRAIT
Article 15 of the GDPR
The General Data Protection Regulation enshrines the right of persons residing in the EU to access their personal data (Article 15). This right includes access to meaningful information about the logic involved in any automated processing of personal data and, at least in cases of profiling, the envisaged consequences of such processing.
If users exercise the right conferred on them by Article 15 of the GDPR and require social networks to give them access to their portrait, to the algorithmic logic that governs its processing, and to the consequences of that processing, they will be able to identify and prove the prohibited, abusive, or immoral areas of collection, processing, and use.
They will then be able to exercise other rights enshrined in the GDPR, such as erasure (Article 17).
They will also be able to assess the compatibility between the privacy policy, the consents given, and the effective use that is made of their information.
And, outside the scope of the GDPR, the user holds several other rights in the field of Law of Obligations and in the Law of Contracts.
Conceivably, they may recover, in their favor, the advertising revenues obtained illicitly, or the revenues obtained from the sale of their personal information to third parties for other purposes, as in the Cambridge Analytica case.
Let us remember that the data subject is a party to a contract: the one established between the user and the social network company. As such, the data subject has the right to good-faith execution of the contract and may claim the legal consequences of the breach of that duty by the other party, pleading the contractual liability of the platform in addition to its non-contractual liability.
Within the framework of the Law of Obligations, unjust enrichment may occur, within the meaning and with the consequences provided for in Article 473 of the Portuguese Civil Code:
He who, without just cause, enriches himself at the expense of another is obliged to restore that with which he has unjustly enriched himself.
The obligation to restore, based on unjust enrichment, has as its object, in particular, what was unduly received, or what was received by virtue of a cause that has ceased to exist or in view of an effect that did not materialize.
Each social network user is the owner of their personal data and of the scope of the permissions granted for its use.
And this contractual relationship is dynamic, bilateral, and bidirectional: it is subject to the vicissitudes that result from the actions of either party.
Users, the owners of the personal data on which their algorithmic portrait is built, can and should take possession of what is theirs: they must demand to know what the social network has, knows, and says about them, with what aims, and in obedience to what logic.
The civic movement #CALL4ALGOTRANSPARENCY
The CONSUMER ALLIANCE 4 ALGORITHM TRANSPARENCY – #CALL4ALGOTRANSPARENCY – is a consumer initiative, supported by a European consumer association (www.IusOmnibus.eu), that makes available online a standardized text requesting access to data under Article 15 of the GDPR.
Interested parties who subscribe to this request via the form made available online authorize this entity to refer the request to the selected social networks and to take the necessary steps to make the access effective.
Requests are made electronically, much like an online petition: users subscribe to the relevant template, provide their identification and account link, and mandate IUS OMNIBUS - CONSUMER ALLIANCE to submit each request by postal or electronic mail.
OpenUpAlgorithm.eu