AI systems are incredibly powerful. That has at least two implications.
If powerful systems have weaknesses, there is the potential for unexpected negative results.
If people with malicious intent use powerful systems, their potential for evil is enhanced. Like everything else, AI can be used for good or for evil. It is inevitable that some people will use this technology in harmful and illegal ways.
The ethical issues obviously call for some important "best practices", which are mentioned on this page. More general best practices are described on the next page.
AI can be used for data collection and data analysis. Most people now realise that algorithms allow computers, and the companies collecting the data, to know increasingly more about them and to use that data, at best, to serve them and, at worst, to manipulate them.
Churches might justifiably collect data (members' contact details, attendance, giving, spiritual gifts, etc., or information about the wider community) and analyse that data in various ways to best serve those people. However, it is important that the collection and use of the data are justifiable, ethical and legal.
Generally, legislation will prescribe the appropriate collection, storage and use of personal data. In New Zealand, that includes the Privacy Act. Every effort must be made to ensure the provisions of the Act are observed. That includes:
telling people what data is stored and how it is used
allowing people to see what data is stored and request corrections
A particular issue related to artificial intelligence concerns personal data being entered into an AI system (e.g. for analysis). If that system is on-site and secure, there might be no problem. If it is off-site (e.g. in the cloud), there may be no way of knowing where that data will end up. Extreme caution is required where personal information might inadvertently be given away.
It is the responsibility of churches and individual users to ensure that robust data security systems are implemented, including:
prioritising the protection of personal and sensitive information collected through AI systems
encryption (see the sketch after this list)
controls over who can access data
regular audits
not entering individuals' private information into online systems.
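As an illustration of the encryption point above, here is a minimal sketch in Python using the widely available cryptography library. The record fields shown (name, attendance) are hypothetical examples only, not a prescribed schema; a real system, and especially the safekeeping of the key, would need to be set up by someone with appropriate expertise.

```python
# A minimal sketch of encrypting a member record before storage,
# using the Python "cryptography" library (pip install cryptography).
# The record fields below are hypothetical examples, not a schema.
import json
from cryptography.fernet import Fernet

# In practice the key itself must be stored securely (e.g. in a
# secrets manager), never alongside the encrypted data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"name": "Jane Example", "attendance": 42}
token = fernet.encrypt(json.dumps(record).encode("utf-8"))

# Only someone holding the key can recover the original record.
restored = json.loads(fernet.decrypt(token).decode("utf-8"))
assert restored == record
```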
AI systems are only as good as the data they have been trained on, and only as good as the people behind them.
AI systems are limited. They might know a great deal, but there is also a great deal that they do not know or understand. They are machines. Their outputs should not be accepted uncritically. Like most information on the internet, AI output requires a degree of scepticism and needs to be verified. The systems can appear so extraordinarily impressive that the temptation is to assume they must surely be trustworthy. That would be an unwise assumption.
The dataset used to train an AI system might contain biases. Equally, the people doing the training might have certain biases (and almost certainly will). Users need to be aware of that possibility and alert to biases that already exist and are being perpetuated by the system.
A classic example is that AI art generators were likely to depict Caucasians rather than people of other races. Why? Probably because their training data contained mostly Caucasians, leading the systems to "believe" that that is what the world looks like. Since realising that, most companies behind AI art generators have sought to correct the bias.
But the biases might include:
biased information about divisive social issues
worldly attitudes and views not balanced by an understanding of Christian teaching.
Again, those biases could be unintentional or intentional. Those behind the systems could deliberately introduce biases to reflect their own views or to further their own ambitions.
Generative AI systems such as chatbots do not understand what they are talking about! They know how to talk about various subjects but they do not understand those subjects. They simply parrot what they have "heard" other people saying, even if they phrase it slightly differently. Having seen massive amounts of text, a chatbot works out what word is most likely to follow the words it has just used. It adds words to sentences based on probability. It is therefore possible for a chatbot to construct a sentence that sounds right but that it has simply made up. Chatbots can speak with great certainty and be quite wrong. When they make up incorrect information, they are said to be "hallucinating".
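To make the "adds words based on probability" idea concrete, here is a toy sketch. It is nothing like a real chatbot internally (those use neural networks trained on vast datasets), but it shows the core idea: pick each next word according to how often it followed the previous word in the sample text, with no regard for whether the result is true.

```python
# A toy illustration of next-word prediction: count which word tends
# to follow each word in some sample text, then generate a sentence
# by sampling the next word in proportion to those counts. Real
# chatbots are far more sophisticated, but the underlying idea --
# continue with a probable next word, true or not -- is the same.
import random
from collections import defaultdict

sample = ("the church can use ai tools and the church can test ai "
          "outputs and the church can verify ai outputs").split()

# Build a table: word -> list of words observed to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(sample, sample[1:]):
    follows[current_word].append(next_word)

# Generate text, one probable next word at a time.
word = "the"
generated = [word]
for _ in range(8):
    word = random.choice(follows[word])
    generated.append(word)

print(" ".join(generated))  # fluent-sounding, but nothing was "meant"
```

Run repeatedly, this produces plausible-looking fragments that were never checked against reality, which is exactly why chatbot output needs verification.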
Deepfakes are extremely realistic-looking counterfeit media, such as videos, images, and audio recordings, created using artificial intelligence techniques. Deepfakes have raised concerns about their potential for malicious use, such as to spread disinformation (for example, images showing an incident that never happened) or harm someone's reputation (for example, showing people in immoral situations or apparently showing them saying things they never said).
There is concern about how this might influence elections.
Check the accuracy of all AI outputs
Churches need to be aware of the potential for error and doubly aware of the need to test the information and be discerning. The information provided by AI might be helpful in understanding different points of view but might also be very biased. Healthy scepticism and fact-checking using other sources of information are essential.
Always be sceptical of what AI has produced.
Be aware of potential biases and correct for them. For example:
Ask for people of particular, or various, ethnicities in images.
Assess all AI-produced materials theologically and biblically.
You might even want to check for denominational differences if you do not want to confuse your users.
Bring the same high ethical standards to the use of AI as you would to anything else.
Never use AI to hurt another person or group of people.
Never create anything that makes another person look bad - especially anything fake.
Transparency requires that we tell people when we are using AI. This is particularly true at a time when there is limited understanding of AI, on the one hand, and great suspicion, on the other. People might be alarmed to find out that AI has been used and they haven't been told. They might suspect that something secretive and threatening is going on. Trust will be more easily maintained if we are open about what we are doing and why.
Using AI-produced text, graphics, music, videos, etc. as if they were your own work is not really plagiarism, in that it is not co-opting another person's work. The output from the AI is original, and you could claim that you did the work by developing and refining the prompts that produced it. The AI was simply a tool, analogous perhaps to a camera. Prior to the invention of cameras, portraits had to be painted or sketched. Using a camera took all of the skill out of it (until photography became an art form of its own).
On the other hand, there is something dodgy about presenting a product of AI as if it were your own.
For that reason, it is considered best practice to acknowledge the use of AI. Attribution should include naming the AI system that was used.
This issue becomes murkier when AI has been used for ideas or inspiration only, or when it has contributed only partly to the finished product. If I get some inspiration from an AI-written sermon but then write a sermon myself incorporating some of the ideas, do I need to acknowledge the background role AI played? If I ask for Bible study questions on a particular passage and then use one or two of those amongst other questions of my own, do I need to say that?
A separate but related ethical issue
Because AI systems have learnt from other people's data or examples, it is arguable that the whole process is inherently a form of plagiarism. Artists, for example, whose works have been used in the training of the AI (generally without their permission) can feel that those works are being "copied" while they receive no royalties and no acknowledgment. Is the whole enterprise inherently unethical?
These are complex issues awaiting the results of court cases and legislation. In the meantime, Christians should be aware of the complexity of some of these issues and should be committed to the ethical use of AI.
Be transparent about the use of AI.
Tell people when AI is being used and for what it is used.
Be willing to explain how it is used in the hope that the fear can be lessened.
Acknowledge when AI has been used to generate images, text, music, video etc.
It could be argued that this is not necessary, and it might depend on how, and to what extent, the AI has been used. Nevertheless, it is considered best practice to err on the side of caution (and, indeed, honesty) so as to avoid suspicion. We might ask questions such as:
Am I giving the impression that this is my work when it really is not?
If I were to quote another person, I should acknowledge that. Is using wording from an AI system any different?
Machines are not humans. Christian ministry requires a relationship with God and a human "soul".
Many people will want to communicate with a real person, not a machine.
People are increasingly developing relationships with AI personalities (i.e. with computers) rather than with other humans. This is a danger.
People still need human contact, maybe even more so in an age when they will interface more with machines. Churches need to strike a balance between utilising AI technologies and ensuring that personal interactions and relationships remain central to their ministry.
AI systems can do many jobs that people have done, and as the systems become more sophisticated, the range of such jobs will increase.
Some would argue that some jobs will disappear but other job possibilities will emerge. That has happened with other technological advances.
Is that true? Or might we find that this is genuinely disruptive and unemployment will increase? How can we know what will happen? Are we just speculating?
Churches must remain sensitive to this possibility. Jobs within the church might be lost as AI is employed. Is the church okay with that?
But, equally, members of the church will be in danger of losing their jobs out in the wider world. This becomes a pastoral problem as well as an ethical one. Do measures need to be taken to limit the loss of jobs? Are we ready to care for those who lose their jobs, including perhaps through greater sharing of our own resources?
Carefully assess the implications of AI implementation on staff roles and responsibilities, ensuring that human workers are not marginalised or replaced unnecessarily.
Disciple-making aims to help Christians exercise their ministries, using their God-given gifts. Judicious use of AI is sensible, but do not lose sight of people doing the serving.
The growth of the internet and accessibility to computers has created a "digital divide", a gap between those who have ready access to the technology and those who do not. AI will put immense power into the hands of those who know how to take advantage of it, giving them opportunities and influence that others will lack.
Do not use AI in ways that disadvantage those without access to it or the knowledge to use it.
Provide opportunities (access, support and training) for those who would otherwise miss out.
AI systems use a huge amount of computing power. They have a large carbon footprint. While they will provide great efficiencies, that does come at a cost.
It is also necessary to evaluate the financial cost of implementing and maintaining AI systems within an organisation.
There are other ethical issues that do not directly relate to the use of AI by Christians or in churches, such as:
the potential for unmanned (but very intelligent and lethal) "killing machines".
Will these machines become more intelligent than us and take over the world - maybe even eliminate humans?
We could take any aspect of Christian belief and consider the implications of AI. We will not do that in this training but some questions will be addressed in other areas of this site. And you might want to do your own reflection.
What, for example, do the following say about AI? You could add other issues and different questions.
The dignity and worth of people
How might AI diminish the value placed on people?
How might AI enhance the value placed on people?
The sinfulness of people
Are there dangers in trusting people to develop and use AI systems?
Care of God's creation
Research, for example, the carbon footprint of all the computers running AI.
Truth
How might AI help or hinder people knowing the truth?
The end of the world
Could AI lead to the end of the world or the elimination of humanity?
The value of work
How do we care for people whose jobs are threatened by increasing use of AI?
Hope
Where is our hope? Will people look to AI to save the world?
Creativity
Does AI decrease or increase the opportunities for human creativity?
Love
AI systems might mimic love but cannot actually love. What are the implications of that?
How might AI be used to show greater love for people?
Given all of these issues, we might ask if Christians should even use AI. On the other hand, should church employees be expected to create all content themselves, without using AI? If the tools can be used for good, should Christians take maximum advantage of them?
In a church, AI should always be used in ways that support the church's mission and values. That means using AI to enhance worship, foster community, and support the church's serving ministries and outreach. There is always the potential that it could do the opposite and we must be alert to that.