8: Trends in Information Systems
Chapter Learning Outcomes:
1. Define artificial intelligence and outline the history behind AI.
2. Compare machines, humans, and AI.
3. Distinguish between the types of AI and learning methods.
4. Differentiate between the types of extended reality.
8.1 Introduction
Information systems have evolved at a rapid pace ever since their introduction in the 1950s. Today, devices that you can hold in one hand are more powerful than the computers used to land a man on the moon in 1969. The last 10 years have seen the proliferation of intelligent devices: devices that can process information and make suggestions or provide responses. How does Netflix seem to know what types of programs or movies we like? How does Amazon post product displays to our account that spark our interest? How does YouTube seem to provide video feeds that align with what we have previously watched and protect younger viewers from harmful content? This is achieved through artificial intelligence, using algorithms and machine learning.
Artificial intelligence, the ability of a computer or machine to think, learn, and mimic human behaviour, is changing the way companies do business. AI uses software algorithms to simulate human intelligence processes such as reasoning and speaking within computers and other IoT devices. AI simulates these processes through robotics, intelligent agents, expert systems, algorithms, and natural-language processing.
This chapter will provide an introduction to artificial intelligence and provide some examples of how businesses are leveraging this technology. It will also explore other types of emerging technology. Emerging technology includes new technology and technology that is continuously evolving.
8.2 AI Evolution
To understand where the development of intelligent systems is heading, it is important to explore its evolution. One of the first articles discussing the possibility of intelligent machines dates back to 1950, when Claude Shannon published Programming a Computer for Playing Chess, which discussed the development of a chess-playing program.[1]
Around the same time, Alan Turing, a young British mathematician, explored the possibility of artificial intelligence in his 1950 paper, Computing Machinery and Intelligence in which he discussed how to build intelligent machines and how to test their intelligence.
The Turing Test: Can Machines Think?
The Turing Test (also referred to as the Imitation Game) attempts to differentiate between humans and machines. In the test, a human judge asks a human and a machine questions. If the judge cannot reliably tell the difference between the machine and the human (imitating human behavior), the machine is said to have passed the test, and therefore to have the ability to think. There have been criticisms of the test, and Turing responded to some of them. He stated that he did not intend for the test to measure the presence of “consciousness” or “understanding”, as he did not believe this was relevant to the issues that he was addressing. The test is still referred to today.
The first official usage of the term “AI” was in 1956, at which point AI systems were used mainly to solve simple mathematical problems that were too tedious for humans. From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language respectively.
In the 1980s, expert systems, which mimicked the decision-making process of a human expert, were introduced. The Japanese government heavily funded expert systems and other AI-related endeavors as part of their Fifth Generation Computer Project (FGCP).
In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. In the same year, speech recognition software, developed by Dragon Systems, was implemented on Windows. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.
In 2011, IBM’s Watson competed against two “Jeopardy!” winners, Ken Jennings and Brad Rutter, and emerged victorious. In 2014, a chatbot named Eugene Goostman became the first computer program reported to have passed the Turing Test devised by Alan Turing himself. In 2017, Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie.
See some of these major events in the evolution of AI in the timeline below.
https://ecampusontario.pressbooks.pub/informationsystemscdn/chapter/13-2-can-machines-think-n/
Section Attributions
Reynoso, R. (2021, May 25). A complete history of artificial intelligence. G2. https://www.g2.com/articles/history-of-artificial-intelligence ↵
OER (2/5): Information Systems for Business and Beyond Copyright © 2022 by Shauna Roch; James Fowler; Barbara Smith; and David Bourgeois is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
8.3 Machines, AI & Humans
Current development of intelligent machines is focused on mimicking human behavior to perform complex tasks. Applications of AI are proliferating and being integrated into our everyday lives. Consider the technologies you use every day: which of them incorporate AI?
8.3.1 Machines vs AI
Machines and AI systems may appear to be the same, but their systems and mode of operation are vastly different.
Machines that incorporate AI are typically capable of analyzing data, and they appear to “make decisions”, similar to humans.
Regular machines simply execute a series of commands and are unable to improve based on the data they receive.
Example: Machine vs AI
A great example of the difference between machines and AI systems is the platform Grammarly versus a typical spell checker. While a spell checker is able to correct your spelling, grammar, and punctuation, Grammarly can do all that as well as check for issues in sentence structure, misused words, and more through the power of AI.
A key distinction between a conventional program executing code and an AI system performing the same task is that with AI, the algorithm becomes smarter as more data is entered, and the number of errors decreases over time. AI models are created through algorithms, which are sets of rules or processes to solve a specific problem or task.[1]
Example: Algorithm
Imagine you were given some numbers and were tasked with developing an algorithm that sorts the numbers from lowest to highest. What would be the set of steps that you would need to follow to accomplish this task?
One solution would be to first define the range of numbers and set some criteria for the computer to follow. Next, write a specific set of instructions for how the numbers should be ordered; for example, you may want the computer to sort the numbers lowest to highest. If there are several datasets, a loop, which is a part of code that allows a command to run again and again, could be used to ensure the computer repeats the command until all datasets are sorted. The algorithm would then execute the commands one by one, and any errors would be attributed to a mistake in the code.
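A minimal sketch of such an algorithm in Python is shown below; the function name and the sample numbers are illustrative choices, not a prescribed implementation. Note how the nested loops repeat the comparison step until the list is in order.

```python
def sort_lowest_to_highest(numbers):
    """Sort a list of numbers from lowest to highest using simple swaps."""
    data = list(numbers)  # copy the input so the original list is untouched
    n = len(data)
    # Outer loop repeats the pass; inner loop compares neighbouring pairs.
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                # Swap the pair that is out of order.
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

print(sort_lowest_to_highest([42, 7, 19, 3, 58]))  # -> [3, 7, 19, 42, 58]
```

Because the program only executes these fixed steps, any wrong output traces back to a mistake in the code; an AI system, by contrast, would refine its behaviour as more data arrives.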
8.3.2 AI vs Humans
The distinction between AI and simple machines has been made, but how is AI different from humans? AI and humans differ in a number of ways. Apart from humans being biological, living organisms, a key difference is the type and level of intelligence we possess.[2] We are able to make decisions based on our intelligence, emotions, self-awareness, and creativity, whereas most machines simply perform tasks based on code written by programmers. The main goal behind the advancement of machines and computers is to make these systems more efficient and “smarter” to support humans, which is now being accomplished through AI.
However, tasks that involve emotional intelligence or intuition cannot be automated. A machine will make decisions based on facts and statistical data but cannot, for example, recognize emotions, and thus cannot factor them into its decisions. AI lacks the “human factor”. The table below compares natural (human) and artificial intelligence for a range of different abilities on a scale of achievement (low to high).
Attributions
de Ponteves, H. (2019). AI Crash Course: A fun and hands-on introduction to machine learning, reinforcement learning, deep learning, and artificial intelligence with Python. Birmingham: Packt Publishing Ltd. ↵
Vadapalli, P. (2020, September 15). AI vs Human Intelligence: Difference Between AI & Human Intelligence. upGrad blog. https://www.upgrad.com/blog/ai-vs-human-intelligence/. ↵
Stair, R. & Reynolds, G. (2017). Principles of Information Systems. Cengage. ↵
8.4 Types of AI
AI systems are different from humans and machines, and the degree of variation helps in classifying the different types of AI. AI can be classified based on ability and on functionality. There are three categories of AI based on ability: narrow AI, general AI, and super AI.
8.4.1 Types of AI: Based on Ability
Despite having developed complex machines and applications such as Siri, Alexa, and other virtual assistants, all AI systems that currently exist are classified as artificial narrow intelligence (ANI). According to a general consensus among researchers, artificial general intelligence (AGI) and artificial super intelligence (ASI) are still decades away.[1]
Example: Chatbots
Chatbots and virtual assistants (e.g., Siri and Alexa) are computer programs that use AI and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.[2] These bots are cloud-based and able to gather information, such as user preferences or browsing history, to provide solutions that are specific to each individual. Chatbots improve the individual customer experience because the cloud-based system can provide personalized and relevant information.
In the hospitality industry, Best Western, in collaboration with IBM Watson Advertising, used AI and NLP to create a more personalized customer experience for those looking to book Best Western hotels for their holidays. Specifically, “Conversations”, an AI-powered advertising model from IBM, was used to provide one-on-one connections and recommend options based on the user’s intent.[3] For example, Conversations was able to provide users with ads containing information on their destination, along with other tips and tricks.
8.4.2 Types of AI: Based on Functionality
AI can also be categorized based on functionality: reactive machines, limited memory, theory of mind, and self-awareness.
Attributions
Dilmegani, C. (2021, August 11). Will AI reach singularity by 2060? 995 experts’ opinions on AGI. AIMultiple. https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/ ↵
Ibm.com. (n.d.) What is a chatbot? https://www.ibm.com/topics/chatbots ↵
Ibm.com. (2021). IBM Watson Advertising Conversations - Overview. https://www.ibm.com/products/watson-advertising-conversations ↵
8.5 Machine Learning and Deep Learning
Now that you understand how AI systems differ from humans and machines, and their evolutionary arc, the next question is: how do they work? Intelligent systems are created through machine learning and deep learning techniques. These techniques are used to train, test, and validate AI models that can be implemented into different systems in an effort to automate simple tasks, process massive amounts of data, and even develop new products. Let’s look at the relationship between artificial intelligence, machine learning, and deep learning, as illustrated in the image below. It is important to understand the relationship between these terms, as they are similar and can be confusing.
8.5.1 Machine Learning
Machine learning is a technique that is used by an AI system to analyze data, find patterns, and make decisions automatically or with minimal human support. Machine learning enables a system to sort, organize, and analyze data in order to draw important conclusions and make predictions. Machine learning techniques can be subdivided into three categories: supervised learning, unsupervised learning, and reinforcement learning.
Supervised learning is typically used to create algorithms where the model must learn how the inputs affect the outputs; these models are therefore used in applications like predicting house prices, image classification, and even weather prediction.
AI systems created through unsupervised learning algorithms are given unlabelled data and, like the human brain, the model is expected to connect information together by detecting patterns within data sets. The data given to the machine is unorganized, and the machine does the work of drawing inferences and conclusions from existing data points to sort them into groups or clusters.
Example: Supervised and Unsupervised Learning
If a company wants to create a new marketing campaign for a particular product line, they may look at data from past marketing campaigns to see which of their consumers responded most favorably. Once the analysis is done, a machine learning model is created that can be used to identify new customers who fit that same profile. It is called “supervised” learning because we are directing (supervising) the analysis towards a result (in our example: consumers who respond favorably).
If a retailer wants to understand the purchasing patterns of its customers, an unsupervised learning model can be developed to find out which products are most often purchased together or how to group customers by purchase history. It is called “unsupervised” learning because no specific outcome is expected.
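As a rough illustration of the two approaches, the sketch below uses the open-source scikit-learn library; the customer data, feature meanings, and labels are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Hypothetical features per customer: [past purchases, ad clicks].
X = np.array([[5, 12], [1, 2], [4, 9], [0, 1], [6, 15], [2, 3]])

# Supervised: every row carries a label we want the model to predict.
y = np.array([1, 0, 1, 0, 1, 0])              # 1 = responded favorably
model = LogisticRegression().fit(X, y)        # learn how inputs affect the output
print(model.predict([[3, 8]]))                # classify a new customer

# Unsupervised: no labels; the algorithm groups similar rows on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit(X)
print(clusters.labels_)                       # the cluster each customer fell into
```

The supervised model is steered toward a known outcome (a favorable response), while KMeans simply discovers whatever grouping the data supports.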
Machine learning models can form the core of logistics and supply chain solutions by optimizing product packet size, delivery vehicle selection, delivery route selection, and delivery time computation. For instance, DHL uses Amazon’s Kiva robots, which improve speed and accuracy, for network management.
Reinforcement learning is similar to how humans learn: by making mistakes. Systems created using reinforcement learning become smarter over time as more data is collected and the machine learns from any mistakes it makes. In this type of algorithm, there is no supervision involved; it is based on a reward- or goal-driven system where the machine is rewarded (assigned positive values) for desired behaviors and punished (assigned negative values) for undesired ones. This type of algorithm is therefore well suited to instances where a very specific behavior needs to be determined automatically.
Example: Reinforcement Learning
Fanuc, a Japanese industrial robotics company, has been leading innovation in this field and is working actively to develop reinforcement learning in its own robots. It uses reinforcement training so that the robots can teach themselves how to pick an object from one box and place it into another. Fanuc applies this process to different tasks and, as a result, can build robots that complete complex tasks far more quickly than humans. [2]
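A toy sketch of this reward-and-punishment idea in Python with NumPy follows; the five-cell “corridor” world, reward values, and parameter choices are all invented for illustration.

```python
import numpy as np

n_states, actions = 5, [-1, +1]          # corridor cells 0..4; move left or right
Q = np.zeros((n_states, len(actions)))   # learned value of each action per state
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(200):
    state = 0
    while state != n_states - 1:         # each episode ends at the goal cell
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            a = int(rng.integers(len(actions)))
        else:
            a = int(Q[state].argmax())
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.1  # punish wandering
        # Q-learning update: nudge toward reward + discounted future value.
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

print(Q.argmax(axis=1))  # learned policy: action 1 ("move right") in cells 0-3
```

Over many episodes, the positive reward at the goal and the small penalty per step teach the agent, without supervision, that moving right is the desired behaviour.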
8.5.2 Deep Learning
Deep learning is a subfield of machine learning and is another technique in which a machine is trained with massive datasets; however, there is significantly less human work involved. In deep learning, the model is able to extract features and classify the data itself instead of having a human identify the features. We can view deep learning as an intelligent technique that uses multiple layers of a computing system known as an artificial neural network to categorize information within a data set. Much like the neural pathways in a brain, scientists and engineers created a web of connected paths for electrical messages, called artificial neural networks (ANNs). These networks function similarly to the neurons that fire electrical impulses within the brain of a living organism, sending messages to and from the brain to accomplish tasks. The data in ANNs are activated and passed to other layers through a series of mathematical calculations to simulate decision-making in computers.
Neural Networks
Algorithms that process information in a way similar to the human brain; they contain a collection of interconnected nodes.
Deep learning and neural networks have been deployed in several fields, such as computer vision, natural language processing, and speech recognition. Deep learning has been used in many healthcare applications for the diagnosis and treatment of chronic diseases. These algorithms have the power to help avert outbreaks of illness, recognize and diagnose illnesses, and minimize running expenses for hospital management and patients. While machine learning, deep learning, and neural networks are all subsets of artificial intelligence, deep learning is actually a subset of machine learning, and neural networks are a subset of deep learning.
Deep learning can be thought of as the automation of predictive analytics. Deep learning is essentially a neural network with three or more layers, which allows it to learn from a large amount of data. Deep learning is behind many artificial intelligence applications that improve automation and analytics. It is used in applications such as voice-activated electronics, self-driving cars, and credit card fraud detection. The evolution of deep learning began after the invention of neural networks; adding more neurons and additional hidden layers to neural networks makes deep learning more sophisticated. Like machine learning, deep learning is also categorized into subcategories, i.e., supervised and unsupervised.
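To make these “layers of mathematical calculations” concrete, here is a minimal NumPy sketch of a forward pass through a small artificial neural network; the layer sizes are arbitrary, and the random weights stand in for values that training would normally learn.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    """Activation function: pass positive signals through, silence negatives."""
    return np.maximum(0, x)

# Three layers of weights: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

x = np.array([0.5, -1.2, 3.3, 0.0])      # one example with four input features

# Each layer multiplies, adds a bias, and "fires" through the activation,
# loosely mirroring neurons passing signals forward.
h1 = relu(x @ W1 + b1)
h2 = relu(h1 @ W2 + b2)
scores = h2 @ W3 + b3                    # raw output scores for two classes
print(scores)
```

Deep learning frameworks stack many more such layers and add the training machinery that adjusts the weights automatically from data.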
8.5.3 Machine Learning vs Deep Learning
Machine learning focuses on developing algorithms that can alter themselves without human involvement to take defined data and generate a required output. Deep learning uses neural networks to learn unsupervised from unstructured or unlabeled data.
Machine learning uses algorithms to analyze data, learn from it, and make smart decisions based on the knowledge learned, while deep learning organizes the algorithms into layers to form artificial neural networks that can learn and make intelligent decisions independently.
Attributions
IBM Cloud Education. (2020, June 3). What is Artificial Intelligence (AI)? IBM. https://www.ibm.com/cloud/learn/what-is-artificial-intelligence ↵
Sharma, P. (2020). 8 Real-World Applications of Reinforcement Learning. MLK - Machine Learning Knowledge. https://machinelearningknowledge.ai/8-real-world-applications-of-reinforcement-learning/ ↵
Jelvix. (2021). Difference between AI vs Machine Learning vs Deep Learning. https://jelvix.com/blog/ai-vs-machine-learning-vs-deep-learning ↵
8.6 Applications of AI
8.6.1 Autonomous Technology
One of the most widely used applications of AI is autonomous technologies. By combining software, sensors, and location technologies, devices that can operate themselves to perform specific functions are being developed. Some examples include: medical nanotechnology robots (nanobots), self-driving cars, or unmanned aerial vehicles (UAVs).
A nanobot is a robot whose components are on the scale of about a nanometer, which is one-billionth of a meter. While still an emerging field, it is showing promise for applications in the medical field. For example, a set of nanobots could be introduced into the human body to combat cancer or a specific disease.
A UAV, often referred to as a “drone,” is a small airplane or helicopter that can fly without a pilot. Instead of a pilot, they are either run autonomously by computers in the vehicle or operated by a person using a remote control. While most drones today are used for military or civil applications, there is a growing market for personal drones.
8.6.2 Robots
Robots are automated machines that can execute specific tasks with very little or no human intervention, accomplishing those tasks with both speed and precision [1]. The development and deployment of robots is most common in manufacturing, where they replace humans in repetitive tasks. Robots are also used in medicine, education, restaurants and hotels, and entertainment. Some of the most popular robots are ASIMO by Honda and Boston Dynamics robots like ATLAS. Robots can increase productivity and accuracy, but are costly. For more advantages and disadvantages, see the table below.
With artificial intelligence, robots will be able to independently assess the events around them and decide on the actions required to meet their given objective. Robots can already record human movement skills and replicate them, as machine learning improves drive efficiency and mobility. [2]
8.6.3 Intelligent Agents
Intelligent agents process the inputs they receive and make decisions or take action based on that information. Sensors allow intelligent agents to perceive their surroundings, while actuators allow them to act on that perception. A software agent, a human agent, and a robotic agent are all examples of agents, each with its own set of sensors to sense its environment and its own actuators to perform actions.
AI agents that can learn from their previous experiences, or that have learning capabilities, are known as learning agents. Beginning by acting on basic knowledge, learning agents subsequently learn to act and adapt on their own automatically. Conversational intelligence tools, for example, can auto-record meetings, transcribe them, and apply AI to the speech. They can be helpful for individuals working in sales and could possibly replace some functions performed by customer relationship management systems.
Siri, Alexa and Google Assistant are examples of intelligent agents that can be accessed from smartphones. These tools can allow users to open applications, send messages, make calls, play voicemails and check the weather.
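A minimal sketch of the perceive-decide-act cycle behind an intelligent agent appears below; the thermostat scenario, class name, and thresholds are invented for illustration.

```python
class ThermostatAgent:
    """A simple reflex agent: a sensor reading in, an actuator command out."""

    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp

    def act(self, sensed_temp):
        # Perceive (sensor value), decide, then act (actuator command).
        if sensed_temp < self.target_temp - 0.5:
            return "heater_on"
        if sensed_temp > self.target_temp + 0.5:
            return "heater_off"
        return "hold"

agent = ThermostatAgent()
for reading in [18.0, 20.8, 23.5]:       # simulated sensor readings
    print(reading, "->", agent.act(reading))
```

A learning agent would go one step further, adjusting its thresholds or rules automatically based on the outcomes of its past actions.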
8.6.4 Natural Language Processing (NLP)
Natural Language Processing (NLP) allows computers to extract meaning from human language. NLP’s goal by design is to read, decipher, and comprehend human language. In Natural Language Processing, a human converses with the machine, which records the conversation and converts the audio to text. The system then analyzes the text, converts its response to audio, and plays the audio file back to the human.
Algorithms use natural language processing to detect natural language rules, converting unstructured language input into a format that computers can recognize. Natural Language Processing (NLP) is used in many translation applications such as Google Translate, various interactive voice response (IVR) applications found in call centres, and software tools that check texts for grammatical accuracy, like Microsoft Word and Grammarly. It is also used in personal assistant tools like Siri, Cortana, and Alexa.
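The sketch below is a deliberately tiny, rule-based stand-in for the step of converting unstructured language into something a computer can act on; the intent names and trigger words are invented, and real NLP systems rely on learned statistical models rather than keyword lists.

```python
import re

# Hypothetical intents mapped to trigger words; a real system would learn these.
INTENTS = {
    "weather": {"weather", "rain", "forecast", "sunny"},
    "greeting": {"hello", "hi", "hey"},
    "booking": {"book", "reserve", "reservation"},
}

def detect_intent(utterance):
    """Lowercase the text, split it into word tokens, and match known intents."""
    tokens = set(re.findall(r"[a-z']+", utterance.lower()))
    for intent, triggers in INTENTS.items():
        if tokens & triggers:            # any overlap counts as a match
            return intent
    return "unknown"

print(detect_intent("Hi there!"))                     # -> greeting
print(detect_intent("Will it rain tomorrow?"))        # -> weather
print(detect_intent("I'd like to reserve a table."))  # -> booking
```

Even this toy version shows the core pipeline named above: capture language, normalize it into tokens, and map those tokens to a structured meaning the program can respond to.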
8.6.5 Expert Systems
Expert systems (ES) are designed to emulate the human ability to make decisions in specific contexts. By means of reasoning, an expert system can perform at a level comparable to or better than a human expert within a specified area. The goal of ES is therefore to solve complex problems by following a line of reasoning derived from human knowledge. This reasoning is normally represented by if–then–else statements instead of conventional procedural code. Expert systems have been reliably used in the business world to gain competitive advantages and forecast market conditions. In a time when every decision made in the business world is critical for success, the assistance provided by an expert system can be an essential and highly reliable resource.
Expert Systems in Fraud Detection
Expert systems can be used in fraud detection. A company can establish rules for detecting fraud and then allow the system to identify fraudulent behaviour based on the set of rules created. For example, a rule can be created to look for transactions that originate from a particular location (one that may be known for criminal activity) and are over a certain amount. Since expert systems are based on rules that are programmed, it may be easy for cybercriminals to circumvent the system. However, expert systems can still be useful as part of a larger fraud defense strategy that may also incorporate more intelligent systems. [3]
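A minimal sketch of such if-then rules in Python follows; the thresholds, location names, and transaction fields are invented for illustration.

```python
# Hypothetical rule base for the fraud scenario described above.
HIGH_RISK_LOCATIONS = {"Location X", "Location Y"}   # placeholder names
AMOUNT_THRESHOLD = 10_000                            # flag transfers above this

def assess_transaction(tx):
    """Apply each if-then rule in turn and collect any alerts raised."""
    alerts = []
    if tx["origin"] in HIGH_RISK_LOCATIONS and tx["amount"] > AMOUNT_THRESHOLD:
        alerts.append("large transaction from a high-risk location")
    if tx["hour"] < 5:                               # outside normal hours
        alerts.append("transaction at an unusual time")
    return alerts or ["no rule fired"]

tx = {"origin": "Location X", "amount": 25_000, "hour": 3}
print(assess_transaction(tx))
```

Because every rule is visible and fixed, such a system is easy to audit; but, as noted above, it is also easier for a criminal who learns the rules to stay just inside them.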
Attributions
TechTarget. (2021, May 14). What are robots and how do they work? Retrieved January 12, 2022, from https://www.techtarget.com/searchenterpriseai/definition/robot ↵
Bryndin, E. (2019). Robots with Artificial Intelligence and Spectroscopic Sight in Hi-Tech Labor Market. International Journal of Systems Science and Applied Mathematics, 4(3), 31-37. doi: 10.11648/j.ijssam.20190403.11 ↵
Lu, C. (2017, February 19). How AI is helping detect fraud and fight criminals. Retrieved December 3, 2021, from https://venturebeat.com/2017/02/18/how-ai-is-helping-detect-fraud-and-fight-criminals/. ↵
8.7 Extended Reality
Another category of emerging technology is extended reality, or XR. XR is an umbrella term that covers all forms and combinations of real and virtual environments. This includes augmented reality (AR), virtual reality (VR), and a combination of the two, mixed reality (MR). [1]
8.7.1 Augmented Reality
Augmented reality (AR) enhances one’s view of the real world with layers of digital information added to it. With AR there is no created scenario; instead, an actual event is being altered in real time.[2] Some examples of this are Snapchat lenses and the game Pokémon Go. AR is being used in e-commerce to help purchasers visualize and interact with the products before purchasing them.
IKEA Augmented Reality Game
Escape the Clutter is an AR escape room game for Snapchat developed by IKEA. In the game, a cluttered 3D room appears on screen. The object of the game is to remove the clutter by adding in IKEA organization solutions; the organization products act as the ‘keys’ to the escape room. As users add the products, they can learn more about them and their benefits. [3]
8.7.2 Virtual Reality
Virtual reality (VR) is a computer interaction in which a real or imagined environment is simulated, allowing users to both interact with and alter that reality within the environment. The popularity and development of virtual and augmented reality have grown due to advances in VR technology and smartphone-based devices like Google Cardboard. Some people view virtual reality as a gimmick to enhance video game playing at home, but the technology is being used in innovative ways.
One way in which businesses are leveraging VR technology is for training and education. This technology is especially valuable in high-risk industries like the military, space exploration, and medicine, where one wrong move can have disastrous consequences. It can also be helpful to simulate interview scenarios or difficult conversations, allowing users to role-play and practice in varied situations. VR can also simulate in-person meetings for those working remotely through the use of avatars, which are computer representations of people.
Attributions
Likens, S. (2019, April 8). The XR factor: The incredible potential of extended reality. https://www.pwc.com.au/digitalpulse/extended-reality-xr-essentials-101.html. ↵
Moawad, G.N., Elkhalil, J., Klebanoff, J.S., Rahman, S., Habib, N., Alkatout, I. Augmented Realities, Artificial Intelligence, and Machine Learning: Clinical Implications and How Technology Is Shaping the Future of Medicine. J Clin Med. 2020 Nov 25;9(12):3811. doi:10.3390/jcm9123811. PMID: 33255705; PMCID: PMC7761251. ↵
Ikea. (2022). It's Time for Some Good, Cleaning Fun. https://www.ikea.com/us/en/campaigns/escape-the-clutter-pub7490bf20 ↵
8.8 Emerging Technology Trends
In addition to AI, a number of other emerging technologies are forecast to have an impact on how businesses operate and communicate. The following are some trends that are driving these new and innovative technologies.
8.8.1 Wearable
The last section explored VR technologies, which include devices that can be worn to simulate virtual environments. Technology that can be worn, or ‘wearables’, has been around for a long time, with applications such as hearing aids and, later, Bluetooth earpieces. Product lines have expanded to include smartwatches, body cameras, sports watches, and various fitness monitors and other wellness devices. Energy harvesting and haptic devices are examples of future developments in wearable technology. Energy harvesting allows body heat to be converted into usable power, and haptic devices allow one to control virtual objects. Wearable haptic devices may be integrated into clothing to help with directions or to assist individuals in navigating the virtual world. [1]
8.8.2 Connected
As discussed in an earlier chapter, the Internet of Things (IoT) refers to embedding electronics in a variety of objects (appliances, lamps, vehicles, lightbulbs, toys, thermostats, jet engines, and so on) and then connecting them via Wi-Fi, Bluetooth, or LTE to the internet. Think of IoT devices as objects you wouldn’t normally consider being connected to the internet, where the connection is independent of human intervention; the uploading of data is virtually automatic. So a PC is not an IoT device, but a fitness band could be. Interconnected devices are becoming ubiquitous, meaning they are everywhere. Today there are IoT devices for monitoring traffic, air quality, soil moisture, bridge conditions, consumer electronics, autonomous vehicles, and the list seemingly never stops.
Principally, three factors have come together to give us IoT: inexpensive processors, wireless connectivity, and a new standard for addresses on the internet known as IPv6. Processors have become both smaller and cheaper in recent years, leading to their being embedded in more devices. Consider the technological advancements in vehicles: your car can now collect data about how fast you drive, where you go, the radio stations you listen to, and your driving performance, such as acceleration and braking. Insurance companies are offering discounts for the right to monitor your driving behavior. On the positive side, imagine the benefit of being informed instantly of anticipated traffic delays so that you can adjust your route to work in the morning. Benefits from IoT are virtually everywhere.
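The scale of the IPv6 address space, one of the three enablers named above, can be checked with Python's standard ipaddress module:

```python
import ipaddress

ipv4 = ipaddress.IPv4Network("0.0.0.0/0").num_addresses   # 2**32
ipv6 = ipaddress.IPv6Network("::/0").num_addresses        # 2**128

print(f"IPv4 addresses: {ipv4:,}")     # about 4.3 billion
print(f"IPv6 addresses: {ipv6:.3e}")   # roughly 3.4e38, enough to give every
                                       # embedded sensor its own address
```

That enormous address space is what lets each thermostat, fitness band, and soil sensor be individually reachable on the internet.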
While there are many benefits to these constantly connected devices, the privacy and security of personal data and information should also be considered.
8.8.3 Collaborative
As more people use smartphones and wearables, it will be simpler than ever to share data with each other for mutual benefit. Some of this sharing can be done passively, such as reporting your location in order to update traffic statistics. Other data can be reported actively, such as adding your rating of a restaurant to a review site. The smartphone app Waze is a community-based tool that keeps track of the route you are traveling and how fast you are making your way to your destination. In return for providing your data, you can benefit from the data being sent from all of the other users of the app. Waze directs you around traffic and accidents based upon real-time reports from other users.
8.8.4 Super
Quantum computing is a technology that applies quantum mechanics to build novel supercomputers: high-performance computers used to solve large-scale computational tasks. Quantum computers can operate at speeds that are exponentially faster than common computers and can make calculations based on the probability of an object’s state before it is measured. Google demonstrated that its quantum computer could solve a problem that no classical computer could solve in a feasible amount of time. For the third year in a row, IBM managed to double its quantum computing power. Additionally, several web service providers, including Amazon, announced plans for cloud-based quantum computing services. Quantum computers could make drug development, power storage, manufacturing, and agriculture better, faster, and more sustainable. They may also unravel cybersecurity infrastructure around the world, making them a potential threat to national security. The field of quantum computing is still in its infancy, and the question of scale remains unsolved.
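As a loose illustration of “calculations based on the probability of a state before it is measured”, the sketch below uses NumPy to simulate repeatedly measuring a single qubit held in an equal superposition; this is ordinary classical simulation, not quantum hardware.

```python
import numpy as np

rng = np.random.default_rng(7)

# A qubit's state is a pair of amplitudes; measurement probabilities are the
# squared magnitudes of those amplitudes and must sum to 1.
state = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                  # -> [0.5, 0.5]

samples = rng.choice([0, 1], size=10_000, p=probs)
print("P(measure 0) ~", np.mean(samples == 0))   # close to 0.5
print("P(measure 1) ~", np.mean(samples == 1))
```

Real quantum machines manipulate such amplitudes directly across many entangled qubits, which is where their advantage over classical simulation comes from.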
Attributions
Perkoic, M. (2022, September 29). How Smart Wearables Are Shaping Our Future. Forbes. https://www.forbes.com/sites/forbesbusinesscouncil/2022/09/29/how-smart-wearables-are-shaping-our-future/?sh=53a8fdde6b24 ↵
8.9 The Future: A Cautionary Approach
As outlined in this chapter, applications of AI and other emerging technologies are increasing rapidly. However, the advancement of these technologies also raises questions with respect to governance and regulation of their development. Concerns over ethical and privacy issues, along with military development, are among the issues to consider. Recall from a previous chapter that ethics means a set of moral principles. Several events reiterate just how easy it is to exploit AI for unethical uses.
8.9.1 Example: AI and Privacy
Examples like the breaking of privacy laws by Google’s DeepMind and the failure to protect Facebook users’ personal data in the Facebook – Cambridge Analytica scandal generated a lot of concern about the use of AI. In both of these examples, the data stored by the AI in DeepMind and Facebook was released to third-party users without the patients’ (DeepMind) or Facebook users’ knowledge or consent. [1]
8.9.2 Example: ChatGPT and Ethical Implications
ChatGPT is a conversational chatbot that was launched by OpenAI in November 2022. It can be used to produce written work as well as write code by responding to prompts or instructions. Following its release, news articles proliferated criticizing the tool over the ethical concerns of its use. Some concerns shared include the increased risk of misinformation, the ability to perpetuate bias, and its ability to impersonate individuals. [2] Academic institutions across the world expressed their concerns around the use of the tool for cheating, and some schools have even banned its use. [3]
However, despite the setbacks, AI appears to be an exponentially growing field. Today, because of the Internet of Things, data volumes have significantly increased, since almost every device we use, such as the pedometers on our phones, thermostats, and voice assistants like Alexa or Siri, has the ability to gather and store data. To keep up with these large amounts of data, AI has been upgraded to increase processing speeds and computational capacity. Nowadays, AI systems are much more efficient, widely available, and easily accessible in contrast to their earlier developmental stages. With the rapid growth in the development of AI, it is important to consider the ethical implications of its use.
Attributions
Sinha, D. (2022). Top 5 Most Controversial Scandals in AI and Big Data. Analytics Insight. https://www.analyticsinsight.net/top-5-most-controversial-scandals-in-ai-and-big-data/ ↵
Steinbeck, A. (2023, Jan 8). ChatGPT explores its own ethical implications. Medium. https://medium.com/illumination/chatgpt-explores-its-own-ethical-implications-17a56b913b06 ↵
Wood, P. & Kelly, M. L. (2023, January 26). 'Everybody is cheating': Why this teacher has adopted an open ChatGPT policy. NPR. ↵
End of Chapter Summary
· The chapter traces the rapid development of information systems since the 1950s, comparing modern handheld devices to early computers.
· The narrative unfolds the power of AI in processing information through algorithms and machine learning, showcasing its ability to mimic human behavior and impact businesses.
· The chapter covers significant achievements such as IBM's Deep Blue defeating a chess champion in 1997, speech recognition software, and recent successes like IBM's Watson winning "Jeopardy!" in 2011 and Google's AlphaGo defeating a Go champion in 2017.
· The chapter then delves into the intricate workings of AI systems, focusing on their development through machine learning and deep learning techniques.
· Deep learning, identified as a subset of machine learning, is presented as a technique requiring significantly less human involvement, utilizing artificial neural networks to independently extract features and classify data.
· The narrative extends to applications of AI, exploring autonomous technologies like medical nanobots, self-driving cars, and unmanned aerial vehicles (UAVs). The use of robots in manufacturing, medicine, education, and entertainment is discussed, emphasizing their potential impact on efficiency and mobility.
· The chapter concludes with an exploration of extended reality (XR) and emerging technology trends. XR, covering augmented reality (AR), virtual reality (VR), and mixed reality (MR), is examined for its applications in e-commerce, gaming, and training.
· Wearables, the Internet of Things (IoT), collaborative technologies, and supercomputing, including quantum computing, are discussed, highlighting their potential impact on various sectors.
Key Terms
Algorithms: A set of rules or processes to solve a specific problem or task.
Artificial Intelligence: The ability of a computer or machine to think and learn, and mimic human behavior.
Augmented reality (AR): Enhances one’s view of the real world with layers of digital information added to it. With AR there is no created scenario; instead, an actual event is being altered in real-time.
Autonomous Technologies: Autonomous robots and vehicles that work by combining software, sensors, and location technologies. Devices that can operate themselves.
Chatbots: Computer programs that use AI and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.
Collaborative Technology: Technology used to share data for mutual benefit. Some of this sharing can be done passively, and other data can be reported actively.
Deep Learning (DL): A subset of Machine Learning in which computers can solve more complex problems without human intervention.
Emerging technology: includes new technology and technology that is continuously evolving.
Expert Systems (ES): Designed to emulate the human ability to make decisions in specific contexts and have had a large impact in the world of AI.
Extended Reality or XR: XR is an umbrella term that covers all forms and combinations of real and virtual environments. This includes augmented reality (AR), virtual reality (VR), and a combination of the two or mixed reality (MR).
Intelligent Agents: Systems that process the inputs they receive and make decisions or take action based on that information.
Internet of Things: The idea of physical objects being connected to the Internet, embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data.
Loop: a part of software code that allows a command to run again and again.
Machine Learning (ML): A technique used by an AI system to analyze data, find patterns, and make decisions automatically or with minimal human support.
Nanobot: A robot whose components are on the scale of about a nanometer, which is one-billionth of a meter. While still an emerging field, nanorobotics shows promise for applications in the medical field.
Natural Language Processing (NLP): Allows computers to extract meaning from human language. Natural Language Processing’s goal by design is to read, decipher, and comprehend human language.
Reinforcement Learning: The trial-and-error-based Machine Learning method in which a machine learns from its mistakes, guided by rewards and punishments, and improves upon them.
Robots: These are automated machines that can execute specific tasks with very little or no human intervention and can accomplish tasks with both speed and precision.
Supervised Learning: A form of Machine Learning in which data is labeled and categorized into groups by humans and algorithms are used to classify data based on labels.
Unsupervised Learning: A form of Machine Learning in which a machine is given unlabeled data and must find its own connections through analysis, clustering, and identifying patterns.
Virtual Reality (VR): Computer interaction in which a real or imagined environment is simulated. This allows users to both interact with and alter that reality within the environment.
Wearable Technology: A category of technology devices that can be worn by a consumer and often include tracking information related to health and fitness.
End of Chapter Discussions
1. Reflecting on today’s technology landscape, what do you perceive as one of the most significant impacts?
2. Considering the chapter’s content, what are the potential benefits and drawbacks associated with the integration of robots in business operations?
3. Can you identify instances in the restaurant industry where technology is replacing human interaction?
4. What is the difference between XR, AR, and VR?
5. What is artificial intelligence, and what are some of its capabilities?
Chapter Attributions:
This text is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License unless otherwise stated.
This chapter was remixed from the following sources:
OER (1 of 4): Information Systems for Business and Beyond (2019) by David Bourgeois is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted. Bourgeois 2019 book was initially developed in 2014 by Dr. David Bourgeois as part of the Open Textbook Challenge funded by the Saylor Foundation. The 2019 edition is an update to that textbook. https://digitalcommons.biola.edu/open-textbooks/1/
OER (2 of 4):Information Systems for Business and Beyond Copyright © 2022 by Shauna Roch; James Fowler; Barbara Smith; and David Bourgeois is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
Chapter summaries, key terms, chapter learning outcomes, 7.2, and introduction authored by Gabrielle Brixey MBA, MC at West Hills College Coalinga.
This content is aggregated and remixed under the Creative Commons Attribution-NonCommercial 4.0 International License unless otherwise stated, by West Hills College Coalinga, January 2024, with summaries and curation provided by Gabrielle Brixey MBA, MC.
Section Attributions
8.1 Information Systems for Business and Beyond (2019)-Chapter 13 & Chapter 4 by David Bourgeois is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
8.2 Module 1: Introduction to Artificial Intelligence from Thriving in the Age of Artificial Intelligence by the Faculty of Engineering and Applied Science at Ontario Tech University under a Creative Commons Attribution license, unless otherwise noted.
8.2 The History of Artificial Intelligence by Rockwell Anyoha, Harvard University SITN Boston is licensed under CC BY-NC-SA 4.0
8.3 Module 1: Introduction to Artificial Intelligence from Thriving in the Age of Artificial Intelligence by the Faculty of Engineering and Applied Science at Ontario Tech University under a Creative Commons Attribution license, unless otherwise noted.
8.4 Module 1: Introduction to Artificial Intelligence from Thriving in the Age of Artificial Intelligence by the Faculty of Engineering and Applied Science at Ontario Tech University under a Creative Commons Attribution license, unless otherwise noted.
8.5 Module 1: Introduction to Artificial Intelligence & Module 2: Machine Learning and Deep Learning from Thriving in the Age of Artificial Intelligence by the Faculty of Engineering and Applied Science at Ontario Tech University under a Creative Commons Attribution license, unless otherwise noted.
8.6 Autonomous Technologies from Information Systems for Business and Beyond (2019)-Chapter 13 & Chapter 4 by David Bourgeois is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
8.6 Expert Systems from Protecting artificial intelligence IPs: a survey of watermarking and fingerprinting for machine learning by F. Regazzoni, P. Palmieri, F. Smailbegovic, R. Cammarota, and I. Polian, licensed under CC BY 4.0
8.6 Advantages and Disadvantages of Robots from Robots with Artificial Intelligence and Spectroscopic Sight in Hi-Tech Labor Market by Evgeniy Bryndin is licensed under a Creative Commons Attribution 4.0 License unless otherwise noted.
8.8 Information Systems for Business and Beyond (2019)-Chapter 13 & Chapter 4 by David Bourgeois is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
8.9 Module 1: Introduction to Artificial Intelligence & Module 5 Societal and Ethical Impacts of AI from Thriving in the Age of Artificial Intelligence by the Faculty of Engineering and Applied Science at Ontario Tech University under a Creative Commons Attribution license, unless otherwise noted.
This text is a remixed OER licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.