ETHICS AND ARTIFICIAL INTELLIGENCE: THE DILEMMA FACING TODAY'S PR PRACTITIONER

By Anthony Olabode Ayodele (Chart.PR, FIIM, IDM 2017 & 2018)

 

Many argue that what is ethical is not necessarily what is legal, and they may use this argument to play down the importance of ethics; but when placed under the microscope, that argument doesn’t hold water.

Ethics is to law what first aid is to medical treatment in a hospital; ethics points at what the law is trying to achieve.

It is the forerunner to law: where best ethical practices and norms are applied and properly followed, a breach of the law becomes most unlikely.

As Artificial Intelligence (A.I.) increasingly becomes a feature of Public Relations, it is essential to look at the ethics of A.I. even as we embed it in our daily lives and the use of chatbots and similar tools becomes the norm.

Artificial Intelligence derives from programming what we consider essential in day-to-day life. In other words, it is the things we need and do daily that we seek artificial intelligence programmes for.

No matter how foolproof an A.I. programme may be, its ethical application may still require human judgment every now and then. It is the failure to recognise this required human element that brings about the dilemma that those faced with A.I.-supported programmes have to put up with.

And, as in other aspects of life, what amounts to being ethical can be tricky; it’s not always just ‘rules and the law’. I’ll use an illustration from sport to buttress my point.

The US Open final held at Flushing Meadows on Saturday, 8 September 2018, indicates that rules alone don’t always determine what is ethical.

Decisions surrounding the outcome of the game provide much food for thought.

Judge Carlos Ramos had used a thumbs-up signal from Coach Patrick Mouratoglou (coach of tennis star Serena Williams) to rule that the star female tennis player had been ‘coached’ during the game.

Clearly, as stated in Rule 30 of the ITF Rules of Tennis (itftennis.com), coaching is defined as ‘communication, advice or instruction of any kind and by any means to a player’.

Furthermore, it is stated that ‘communication of any kind, audible or visible, between a player and a coach may be construed as coaching’.

The snag here, however, lies in what the components of communication are.

Any definition of the word will state that the two main factors in communication are the sender and the receiver.

In Serena’s case, though the coach agreed he ‘signalled’ and ‘coached’ (and hence was the sender), Serena made it clear she wasn’t looking at her coach, wasn’t the receiver of the signal and wasn’t coached, as she didn’t see the thumbs-up signal.

This implies that, in the true sense of the word, there was no communication between ‘coach’ and ‘player’.

If that is the case, was it right for the judge to rule that she was coached when there was no communication between coach and player?

Obviously, during such a fast-paced game, unless a coach shouts out an instruction, the player, most likely, wouldn’t notice much of what’s going on in the stands.

 Williams said of Coach Mouratoglou: "He said he made a motion, I don't understand what he was talking about. We've never had signals."

What is being called into question here is Judge Ramos’ focus on what happened in the stands without tying it to any corresponding response on the court, or to whether it had an impact on the play or the player at that moment.

 

Ethically, the manner in which the rule was applied here is also an issue.

The dilemma bordering on the ethics of a decision is a bit like that, especially as it relates to Artificial Intelligence, where many lives and destinies will depend on the decisions taken.

Public Relations has always been strongly ethically inclined. Whether Judge Ramos’ decision on Serena was right or not, people must be cognisant that rules can always be applied one way or the other; hence, the ethical nature of how they are applied really does matter.

Ethics in A.I. matters so much because, whereas in other instances a human element may always stand behind the decisions taken, once a machine has been programmed to rule on a decision, the dilemma of whether it is ethical may no longer be revisited.

For instance, using the example of Serena and the tennis court judge: since the focus during play is mainly on the players on court and not on what happens in the stands, let us assume A.I. is used to monitor the stands as they affect play.

Building on Judge Ramos’ ruling, a machine-learning programme could detect signals (such as hand-waving, pointing and thumbs-up) without necessarily establishing whether they have a direct impact on the game at hand, or whether the signal is even received by, or meant for, a player on the court.

If such a programme were used to decide whether a player receives one of a possible three warnings that lead to a penalty, as in Serena’s case, one can imagine how easily A.I. could be misleading.
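
To make this pitfall concrete, here is a minimal, purely illustrative sketch in Python. The data structure, field names and rules are my own assumptions, not any real umpiring or line-calling system; it simply contrasts a detector that flags any signal from the coaching box with one that also checks for a receiver and an effect on play.

from dataclasses import dataclass

# Hypothetical record of a gesture detected in the stands.
# All field names are illustrative assumptions.
@dataclass
class DetectedGesture:
    gesture_type: str          # e.g. "thumbs_up", "hand_wave", "pointing"
    from_coaching_box: bool    # did it originate from the player's coaching box?
    player_was_looking: bool   # did the player actually see it? (hard to infer)
    play_affected: bool        # any measurable change in the player's behaviour?

def naive_rule(g: DetectedGesture) -> bool:
    # Mirrors the ruling questioned above: any visible signal from the
    # coaching box is treated as coaching, with no check for a receiver.
    return g.from_coaching_box

def context_aware_rule(g: DetectedGesture) -> bool:
    # Adds the human-judgment element: coaching needs a sender AND a
    # receiver, plus some effect on play.
    return g.from_coaching_box and g.player_was_looking and g.play_affected

signal = DetectedGesture("thumbs_up", from_coaching_box=True,
                         player_was_looking=False, play_affected=False)
print(naive_rule(signal))          # True  -> violation called, as in the US Open example
print(context_aware_rule(signal))  # False -> no communication, so no coaching call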

In fact, on realising this, someone in a player’s team could deliberately give a signal at a crucial moment (in Serena’s case the signal coincided with a point at which she grunted and was trying to mount a comeback rally against her opponent, Naomi Osaka) and, as it did in this instance, momentarily throw the player off balance and make the person affected lose concentration.

This could be the dilemma that arises with the ineffective application of A.I.

It thus emerges that if A.I. were used in a manner similar to goal-line technology and programmed based on the judge’s decision, we would be left with many, most likely angry, dissatisfied tennis players.

According to an article by the World Economic Forum titled ‘This is how we can hold AI accountable’:

‘AI is … increasingly making decisions about people’s lives’.

The article, written by Nick Easen, goes on to state:

‘This raises many burning ethical issues for businesses, society and politicians, as well as regulators. If machine-learning is increasingly deciding who to dole out mortgages to, tipping off the courts on prosecution cases or assessing the performance of staff and who to recruit, how do we know computerised decisions are fair, reasonable and free from bias?’

It is the study of such questions that has brought about what is regarded as ‘Robot Ethics’ (also known as Roboethics): an examination of what is ethical regarding the use of A.I. and automation.

In conclusion, I set out below a working guide, drawn from my experience of applying ethics to PR, on what I feel could help; a simple sketch of how these checks might be recorded follows the list:

1) WILL IT STAND THE PR PANEL TEST: Let’s assume a PR panel is set up to ethically examine every outcome of an A.I. programme. Will each outcome pass the test, or would there be wild controversies, as in the case of the US Open final discussed previously?

 

2) IS IT FAIR TO ALL: Would the manner in which A.I. is used in each instance be considered fair? For example, if a hidden camera were placed within a Dyson hot-and-cool fan and programmed to trigger and record each time there is movement past it, or a smart television had a similar A.I. recording programme installed in it by the manufacturer (for whatever reason), would it be fair for the eventual owners of these devices not to know?

 

3) WOULD IT ENDANGER LIVES: Are human lives safe and protected in the application of A.I. programmes?

 

4) DOES IT GIVE UNFAIR ADVANTAGE TO ONE PARTY: In other words, is another party put at a disadvantage as a result of the application of the A.I. programme (with the advantage, in many cases, going to the manufacturer or seller)?

 

5) CAN IT BE USED IN A VICE-VERSA MANNER (THE VICE-VERSA RULE): Would the user of an A.I. programme want it applied in the same manner to him or her? For instance, would the manufacturer of a secret camera that records movement in a device owner’s home want his or her own life and home monitored secretly in like fashion?
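
The sketch referred to before the list is below: a hypothetical illustration in Python (the names and structure are my own assumptions, not an established tool) of how a PR panel might record its answers to the five questions for each A.I. outcome and flag those needing further human review.

from dataclasses import dataclass

# Hypothetical record of a PR panel's answers to the five questions above.
# Field names are illustrative assumptions only.
@dataclass
class EthicsReview:
    passes_pr_panel_test: bool    # 1) Will it stand the PR panel test?
    fair_to_all: bool             # 2) Is it fair to all?
    endangers_lives: bool         # 3) Would it endanger lives?
    gives_unfair_advantage: bool  # 4) Does it give unfair advantage to one party?
    passes_vice_versa_rule: bool  # 5) Can it pass the vice-versa rule?

    def needs_escalation(self) -> bool:
        # Flag the A.I. outcome for further human review if any check fails.
        return (not self.passes_pr_panel_test
                or not self.fair_to_all
                or self.endangers_lives
                or self.gives_unfair_advantage
                or not self.passes_vice_versa_rule)

# Example: a smart device that secretly records its owners.
review = EthicsReview(passes_pr_panel_test=True, fair_to_all=False,
                      endangers_lives=False, gives_unfair_advantage=True,
                      passes_vice_versa_rule=False)
print(review.needs_escalation())  # True -> refer the outcome back to the panel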

 

There is a need to understand that PR in itself provides guidance as to what is right and what can keep a client on the right path and out of trouble. Many A.I. programmes have been pre-prepared before PR practitioners come to apply or use them, even where they are applied in PR settings.

Hence the need for PR practitioners, where possible, to ‘cross borders’ and ensure A.I. programmes are PR/A.I.-compliant right from the outset.

However, even with the right safety measures in any A.I. programme, as researchers Nick Bostrom of the Future of Humanity Institute and Eliezer Yudkowsky of the Machine Intelligence Research Institute put it in their paper ‘The Ethics of Artificial Intelligence’:

‘The local, specific behaviour of the A.I. [programme] may not be predictable apart from its safety, even if the programmers do everything right’ … and ‘ethical cognition itself must be taken as a subject matter of engineering.’

Or, in another statement attributed to the authors: ‘Verifying the safety of the system becomes a greater challenge because we must verify what the system is trying to do, rather than being able to verify the system’s safe behaviour in all operating contexts’.

This suggests that the PR practitioner, as a matter of necessity, may need to be part of and get involved in the verification, and where required the design, of A.I. programmes that will affect PR, to ensure they are ethically compliant.

Exoneration on the grounds that the programme was not designed by, and hence did not have the input of, the PR consultant will no longer serve as a tenable excuse.

This is the increasing dilemma the PR practitioner is faced with as A.I. in PR becomes prevalent.

Is it really worth it?

Considering the positive impact A.I. has on PR, the answer is in the affirmative!



ABOUT THE AUTHOR

 

Anthony Olabode Ayodele is a Chartered PR Practitioner, a back-to-back IDM CPD Award 2017 & 2018 recipient and a Fellow of the Institute of Information Management (FIIM).

He is also the Founder of Improving Public Relations for Africans in Diaspora (IPRAID), a body made up of Public Relations influencers in the diaspora aimed at increasing the influence and membership of BME PR practitioners in mainstream regulatory PR bodies.

He holds a BSc in Business Management (Travel and Tourism pathway) from Plymouth University. He also holds postgraduate certificates from Harvard University (Leaders of Learning), Oxford University (Economics) and the World Bank (Risk Management).

Ayodele additionally holds diplomas in Accountancy, Public Relations (PG) and Project Management (PG). He is a qualified trainer, having completed a Preparing to Teach in the Lifelong Learning Sector (PTLLS) course and obtained the training certificate.

He is an elected member of the Wenlock Barn TMO Board, also serving as General Secretary of its Social Tenant and Engagement Committee.

He also serves on the Chartered Institute of Public Relations (CIPR) Education and Skills Committee.

He operates from and is based in London, UK.



CONTACT: bodeayo67@yahoo.co.uk / ideasconceptsandproducts@mail.com | Mobile: 00447495337259 | Landline: 00442072532133


 
