Artificial Intelligence (AI) can take form in a variety of different ways. At its core, AI is a computer system programmed to learn from the data it analyzes and to perform actions based on that knowledge. In much simpler terms, it has been described as "anything a computer can do that formerly was considered a job for a human" (Greenwald, 2018). Artificial Intelligence powers personalized ads on social media, search engines such as Google, and voice-command assistants such as Siri and Alexa. The many forms of Artificial Intelligence are depicted below in Figure 1.
In the past decade, the capabilities of Artificial Intelligence have grown tremendously. Ideas regarding AI began surfacing in the 1940s, when mathematician Alan Turing introduced the theory of computing (Greengard, 2019). The theory proposed that certain algorithms could be created which machines could essentially build upon, thus emulating human-like thinking.
The earliest attempts at physically creating AI began with sequences of code that aimed to emulate the same problem-solving abilities as humans (Greenwald, 2018). Essentially, numerical data was formed to represent human actions, and that data was then inserted into the code in an attempt to direct computers to act in ways similar to humans. These codes instructed the computer how to interpret the data to produce the desired result.
The problem with this technique, and the cause of its failure, was the simple fact that humans are complex. It is not feasible to encode instructions for the entire range of human possibilities (Greenwald, 2018). In essence, the program was too simple to predict and account for the actions of such a diverse species.
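To make this limitation concrete, the short Python sketch below (not from the source) shows what such hand-coded, rule-based instructions might look like; the rules and responses are invented purely for illustration.

```python
# A minimal sketch (not from the source) of the early, rule-based approach:
# every behavior must be spelled out as an explicit, hand-written instruction.
# The rules below are invented for illustration only.

def respond(message: str) -> str:
    """Reply by matching hand-written rules against the input text."""
    text = message.lower()
    if text.startswith(("hello", "hi")):
        return "Hello!"
    if "how are you" in text:
        return "I am fine, thank you."
    # Any phrasing the programmer did not anticipate falls through here,
    # which is why hand-coding the full range of human behavior fails.
    return "I do not understand."

print(respond("Hi there"))      # Hello!
print(respond("What's up?"))    # I do not understand.
```

Every possible human input must be anticipated by the programmer in advance, which is exactly the scaling problem described above.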
Despite these early design attempts relating to Artificial Intelligence, the term was not officially coined until 1956 by researchers at Dartmouth College (Greengard, 2019). The team of researchers included Marvin Minsky (computer scientist), Claude Shannon (information theorist), and Nobel Prize winners Herbert Simon and John Nash. Together, they created a program that taught computers to play checkers with a higher success rate than a human could achieve.
It was with the advent of the internet that AI capabilities were able to skyrocket, as massive amounts of data became readily available. This made the progression of AI much easier: programmers no longer had to create the instructions for the computers to follow, because the computers could analyze the data themselves and determine what course of action to take.
At this point, Artificial Intelligence slowly began to blossom: in 1997, the International Business Machines Corporation (IBM) developed Deep Blue, a chess-playing supercomputer (Greengard, 2019).
It was not until 2011, however, that AI reached its next major milestone. In that year, IBM introduced Watson, a supercomputer with highly advanced AI capabilities that far exceeded those of Deep Blue. This system became famous for its ability to beat two famously skilled champions from the game show Jeopardy! (Greengard, 2019). While similar to the program created
As Greenwald (2018) explains, creating an AI that follows certain procedures is done in just two steps, the first being training. To train an AI, it is given a set of labeled data from which it learns. The programmer then gives the AI additional data, some related to the first set and some not. Greenwald used the example of cats: the first set of data, used for training, consisted entirely of photos of cats, while the second set included cats as well as other miscellaneous photos. At this point, the Artificial Intelligence should be able to differentiate the photos of cats from all other photos. This second step is known as inferencing, in which the computer produces new information based on the prior knowledge given to it, which is the central idea of Artificial Intelligence (Greenwald, 2018).
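The Python sketch below illustrates these two steps under simplified assumptions: the "photos" are stand-in numeric feature vectors rather than real images, and a scikit-learn logistic regression model is used as one possible learning algorithm, not the specific method described in the source.

```python
# A minimal sketch of the two-step process described above: training on
# labeled examples, then inferencing on new, unseen examples.
# The feature vectors and labels are made up for illustration; a real system
# would extract features from actual photos.

from sklearn.linear_model import LogisticRegression

# Step 1: training -- labeled data (1 = cat, 0 = not a cat).
# Each row is a toy two-value feature vector standing in for an image.
training_features = [
    [0.9, 0.8],  # cat photo
    [0.8, 0.9],  # cat photo
    [0.1, 0.2],  # not a cat
    [0.2, 0.1],  # not a cat
]
training_labels = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(training_features, training_labels)

# Step 2: inferencing -- the model labels photos it has never seen,
# producing new information from what it learned during training.
new_photos = [
    [0.85, 0.75],  # likely a cat
    [0.15, 0.25],  # likely not a cat
]
print(model.predict(new_photos))  # e.g. [1 0]
```

The key point mirrored in the code is that the programmer never writes rules for recognizing cats; the distinction is learned from the labeled training set and then applied to data the model has never seen.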