You only need to browse the Internet for a few minutes before you come across a wealth of knowledge claims. On the one hand, it is fantastic that we have so much information at our disposal. When I was your age and needed information for a history project, it took a lot of time to get what I wanted. I had to cycle to the town library, search the card index, find the historical journal I needed, copy out the details by hand, and then cycle back home. Now, it takes a few seconds to "Google" the same information.
On the other hand, the very same wealth of information and knowledge at our disposal can be overwhelming. After all, how do we select the best and most reliable sources? How do we know what we should believe? It can indeed be difficult to distinguish between genuine, well-founded knowledge and well-presented but unfounded claims.
Reflection: Are there ethical limits to the progress in knowledge acquired through the use of technology?
Learning activity:
Read the article below about deepfakes. Then discuss it in groups: https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them
How has access to technology and multimedia impacted the way in which we can rely on our senses to acquire knowledge?
Filter bubbles:
A situation in which an Internet user encounters only information and opinions that conform to and reinforce their own beliefs, caused by algorithms that personalize an individual’s online experience.
‘the personalization of the web could gradually isolate individual users into their own filter bubbles’.
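The filtering mechanism behind this definition can be made concrete with a deliberately simplified sketch. This is not any real platform's algorithm; the `personalise` function, the topic labels and the sample feed are all invented for illustration, but they show how recommending only what matches a user's past engagement narrows what that user ever sees:

```python
# Toy illustration of a filter bubble: the "algorithm" only shows a user
# items whose topic matches something they have already engaged with.

def personalise(feed, user_history):
    """Keep only the feed items whose topic appears in the user's history."""
    liked_topics = {item["topic"] for item in user_history}
    return [item for item in feed if item["topic"] in liked_topics]

feed = [
    {"title": "Tax cuts work",            "topic": "politics-right"},
    {"title": "Raise the minimum wage",   "topic": "politics-left"},
    {"title": "New exoplanet discovered", "topic": "science"},
]
# The user has previously clicked on one right-leaning story...
history = [{"title": "Lower taxes now", "topic": "politics-right"}]

bubble = personalise(feed, history)
# ...so only the "politics-right" story survives the filter. Every click
# feeds back into the history, so the filter narrows with each iteration.
```

The key point is the feedback loop: the output of the filter becomes the input to the next round of filtering, which is why the isolation in the definition above is gradual rather than immediate.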
The advancements in technology have enabled us to gain access to a much wider range of knowledge. In a way, knowledge has also become more "democratic". Almost anyone can access, and even disseminate, knowledge online. At first sight, this seems wonderful, because we may feel that the quality of knowledge could be improved if a large community is able to share and evaluate it. Nevertheless, technology can also hinder our search for knowledge. If technology allows virtually anyone to express and propagate (what seems to be) knowledge, we are bound to come across some less reliable sources and even unfounded claims. Unfortunately, not everyone assesses those claims when they stumble across them. The current political climate in many countries, for example, shows that large groups of people can easily be swayed by emotionally appealing but essentially false claims. We could argue that mere "belief" or "emotional" appeal has always been a decisive factor in what we tend to accept as knowledge. The popularity of (ancient) religious knowledge claims about the state of the natural world (e.g. intelligent design), for example, arguably illustrates this. Some people are easily swayed by emotionally appealing claims rather than those that appeal to reason, fact and veracity.
When it comes to politics, we should have a closer look at the concept of "post-truth". Nowadays, the way in which an argument is presented seems to somehow weigh more heavily than the actual content of the argument. Rhetoric is obviously not new. However, the way in which we currently engage with what could be considered to be fact or truth is quite unusual. This "disinterested engagement" with the truth can partly be explained through the way in which information technology shapes cognition. The overwhelming amount of information at our disposal makes it more difficult to determine what is true knowledge. This may lead us to accept emotionally appealing rather than factually correct claims. We find ourselves in contradictory times. On the one hand, we have more knowledge at our disposal than ever before. On the other hand, we don't know what to do with all this knowledge. When we don't know how to distinguish between good and bad knowledge, many of us choose the easiest or most appealing claims over the most accurate ones. And this does not always make us more knowledgeable...
As seen previously, technology seems to enable the democratisation of knowledge. Knowledge is easily accessible. Less educated, non-expert and non-elite groups of people can now find knowledge and information that was previously only available to the few. However, technology can also be used to deliberately misinform large groups of people. Technology can be used to gather data about your (online) behaviour. This data can then be used by powerful, elitist entities. Companies can use data to advertise products and shape your purchasing behaviour. In some instances, technology has even been used to sway voters' opinions. In this sense, we may wonder whether technology has hampered rather than enabled equal access to knowledge. How could we use technology responsibly to enable progress in knowledge production, the fair distribution of knowledge and a sustained respect for human rights such as freedom and privacy?
"The Social Dilemma" explores the role of social media in manipulating our access to knowledge and shaping what we consider to be true.
The Social Dilemma 1: https://youtu.be/yGi2YKZZNFg
The Social Dilemma 2: https://youtu.be/uaaC57tcci0
As seen previously, technology can both enable and hinder equal access to knowledge. In addition to ethical questions regarding fair and equal access to knowledge, new technology has given rise to other important moral discussions. Some of these relate to data gathering and the notion of objectivity. Others touch upon individual freedom and the use of algorithms to describe human behaviour. Technology is sometimes used to understand human behaviour, or to gather data on it. On the one hand, this may seem harmless and even useful, because it appears to remove a great deal of human bias. For example, if an AI "calculates", describes and predicts behavioural traits, it may be better at this job than a human being. After all, AI systems are fast and can process a much wider range of data than a human researcher. However, technology may give us the illusion of objectivity. Once the original human input is no longer visible, we tend to forget that it was once there.
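The point about hidden human input can be illustrated with a deliberately simplified sketch. The "risk score" below, its weights and the input fields are all invented for this example; the lesson is that the output looks like a neutral number, even though every weight was a human designer's judgement call:

```python
# A hypothetical "objective" risk score. The weights were chosen by a
# human designer; once buried in the code, that choice is easy to forget.

WEIGHTS = {
    "age": -0.5,
    "prior_incidents": 2.0,
    "neighbourhood_code": 1.5,  # human judgement: why should this matter at all?
}

def risk_score(person):
    """Weighted sum of the person's attributes -- looks neutral, isn't."""
    return sum(WEIGHTS[key] * person[key] for key in WEIGHTS)

applicant = {"age": 30, "prior_incidents": 0, "neighbourhood_code": 7}
print(risk_score(applicant))  # a single number, with the human choices hidden inside
```

A user who only ever sees the final number has no reason to ask who decided that a neighbourhood code should count towards "risk", which is precisely how the human input disappears from view.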
Computers may be better and faster at spotting patterns than human beings, but how do they get access to their wide range of data? Many of us are unaware that our personal data is continually being collected when we leave a digital footprint. What kind of data is being collected? What methods are used to get this data? Does this data give the full picture of what you are really like as a human being? All this raises many ethical questions.
In addition, you should remember that the kind of knowledge produced by technology is not always ethical in nature or purpose. Because this field is so new, and because different countries may have different legal regulations, unethical knowledge (or knowledge that can be used for unethical purposes) seems to creep in. For example, some current face recognition systems claim to be able to tell whether someone is gay or straight simply by analysing a photograph. Regardless of whether this knowledge is true or not, we should question how the output of such algorithms could be used. Systems that claim to describe and predict human behaviour through face recognition could potentially lead to discrimination. So how do we ethically define the limits of progress in knowledge that has been created with the help of technology?
The advancement of knowledge through technology comes with renewed ethical debates. What are the ethical limits to the progress of knowledge created through technology? How much data should information systems be allowed to hold about us? Who owns knowledge that originates from technology? Sometimes we have to program machines, such as driverless cars, that are required to take ethical decisions. Interestingly, unlike humans, these machines will take decisions without resorting to emotion. So, what criteria should we use as foundations for the ethical programming of these machines? Is the absence of emotion an improvement on human ethical decision-making, or not? What do we do when two principles contradict each other? Is it even possible to come up with a universally satisfactory list of criteria?