By Ashish Sanon
Here's a chronological overview of the development of Artificial Intelligence:
In the first half of the 20th century, the concept of artificially intelligent robots took shape in popular culture, beginning with the "heartless" Tin Man from The Wizard of Oz and continuing with the humanoid robot that impersonated Maria in Metropolis.
1950 - Alan Turing, the British polymath, published his paper Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.
1955 - Proof of concept, considered the first AI program: Allen Newell, Cliff Shaw, and Herbert Simon's Logic Theorist was a program designed to mimic the problem-solving skills of a human. It was funded by the Research and Development (RAND) Corporation and presented at DSRPAI the following year.
1956 - The term “artificial intelligence” was first coined by John McCarthy at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by McCarthy and Marvin Minsky.
1957 - 1974: AI began to flourish:
Newell & Simon's "General Problem Solver" - a program for general problem-solving tasks
Joseph Weizenbaum's "ELIZA" - interpretation of natural language (an early conversational program)
The Defense Advanced Research Projects Agency (DARPA) began funding AI research into machines that could transcribe and translate spoken language as well as perform high-throughput data processing.
1958 - The perceptron algorithm was invented at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers; a brief illustrative sketch follows.
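The following is a minimal sketch of the perceptron learning rule described above, assuming NumPy and a made-up toy dataset (neither appears in the original timeline); it is an illustration, not Rosenblatt's original implementation.

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=0.1):
    """Classic perceptron learning rule for a binary (0/1) classifier."""
    w = np.zeros(X.shape[1])  # weight vector
    b = 0.0                   # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            update = lr * (target - pred)  # zero when the prediction is correct
            w += update * xi
            b += update
    return w, b

# Hypothetical linearly separable data: only the point (1, 1) is labeled positive.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)  # learned weights and bias separating the two classes
```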
1974 - 1980: Due to a lack of computational power - slow processing speeds and limited memory - patience dwindled and so did the funding, and research slowed to a crawl for roughly ten years.
1980s - AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds.
John Hopfield and David Rumelhart popularized "deep learning" techniques that allowed computers to learn from experience
Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert
1982 - 1990: The Japanese government invested $400 million in expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP), with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence.
1997 - Garry Kasparov was defeated by IBM's Deep Blue, a huge step toward artificially intelligent decision-making programs.
1997 - Speech recognition software developed by Dragon Systems was implemented on Windows.
2000 - Kismet, a robot that could recognize and display emotions, was developed by Cynthia Breazeal.
2000: The Y2K problem, also known as the year 2000 problem, was a class of computer bugs related to the formatting and storage of electronic calendar data beginning on 01/01/2000. Because much of the software then in use stored the year as only its final two digits, "00" could be read as 1900 rather than 2000, so date comparisons and calculations risked failing when the new year arrived - a challenge for technology and those who relied on it. A small illustration follows.
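A tiny, hypothetical illustration of the bug class described above (the function and values are made up for demonstration):

```python
def years_elapsed(start_yy, end_yy):
    """Subtract years stored as two digits, the way many legacy systems did."""
    return int(end_yy) - int(start_yy)

# Rolling over from 1999 ("99") to 2000 ("00") makes the newer year look older.
print(years_elapsed("99", "00"))  # -99 instead of the expected 1
```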
2000: Honda releases ASIMO, an artificially intelligent humanoid robot.
2002: iRobot released the Roomba, an autonomous robotic vacuum that cleans while avoiding obstacles.
2004: NASA's robotic exploration rovers Spirit and Opportunity navigate Mars’ surface without human intervention.
2004: The sci-fi film I, Robot, directed by Alex Proyas, is released. Set in the year 2035, the film depicts humanoid robots serving humankind while one individual remains vehemently anti-robot because of a personal tragedy whose outcome was determined by a robot.
2006: Oren Etzioni (computer science professor), Michele Banko, and Michael Cafarella (computer scientists) coined the term "machine reading," defining it as unsupervised autonomous understanding of text.
2007: Computer science professor Fei-Fei Li and colleagues began assembling ImageNet, a database of annotated images intended to aid research in object recognition software.
2009: Google secretly developed a driverless car. By 2014, it passed Nevada’s self-driving test.
2010: ImageNet launched the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), their annual AI object recognition competition.
2010: Microsoft launched Kinect for Xbox 360, the first gaming device that tracked human body movement using a 3D camera and infrared detection.
2011: Watson, a natural language question answering computer created by IBM, defeated two former Jeopardy! champions, Ken Jennings and Brad Rutter, in a televised game.
2016: A humanoid robot named Sophia is created by Hanson Robotics. She is known as the first “robot citizen.” What distinguishes Sophia from previous humanoids is her likeness to an actual human being, with her ability to see (image recognition), make facial expressions, and communicate through AI.
2016: Google released Google Home, a smart speaker that uses AI to act as a “personal assistant” to help users remember tasks, create appointments, and search for information by voice.
2017: The Facebook Artificial Intelligence Research lab trained two "dialog agents" (chatbots) to communicate with each other in order to learn how to negotiate. However, as the chatbots conversed, they diverged from the human language they had been trained on (English) and invented their own shorthand to communicate with one another – a striking display of artificial intelligence.
2017 - Google DeepMind's AlphaGo defeated the Chinese Go champion, Ke Jie.
2018: The Chinese tech group Alibaba's language-processing AI outscored humans on a Stanford reading-comprehension test, scoring 82.44 against the human benchmark of 82.30 on a set of 100,000 questions – a narrow defeat, but a defeat nonetheless.
2018: Google developed BERT, the first "bidirectional, unsupervised language representation that can be used on a variety of natural language tasks using transfer learning." A short usage sketch follows.
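As a minimal sketch of how such a pretrained representation is typically used for transfer learning, assuming the Hugging Face transformers and PyTorch packages (neither is mentioned in the article):

```python
from transformers import AutoTokenizer, AutoModel

# Downloads the pretrained BERT weights on first use (requires `transformers` and `torch`).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Deep Blue defeated Kasparov in 1997.", return_tensors="pt")
outputs = model(**inputs)

# One contextual vector per input token; a small task-specific head (classifier,
# question-answering layer, etc.) is usually trained on top of these embeddings.
print(outputs.last_hidden_state.shape)  # e.g. (1, number_of_tokens, 768)
```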
2018: Samsung introduced Bixby, a virtual assistant. Bixby's functions include Voice, which lets the user ask spoken questions and receive recommendations and suggestions; Vision, where Bixby's "seeing" ability is built into the camera app so it can recognize what the user sees (i.e., object identification, search, purchase, translation, landmark recognition); and Home, where Bixby uses app-based information to help the user interact with their apps (e.g., weather and fitness applications).
2019-2020:
Chatbots + virtual assistants: Strengthened chatbot and virtual assistant automation for heightened user experience
Natural language processing (NLP): Increased NLP abilities for artificially intelligent apps, including (and especially for) chatbots and virtual assistants
Machine Learning and Automated Machine Learning: ML will shift toward AutoML algorithms to allow developers and programmers to solve problems without creating specific models
Autonomous vehicles: Despite some bad press surrounding various faulty self-driving vehicles, it’s safe to assume there will be a stronger push to automate the process of driving products from point A to point B to:
a. Save on the cost of human labor
b. Optimize the process of purchase-shipment-arrival to consumer via self-driving vehicles that – in essence – won’t get tired behind the wheel
2020-2021:
Artificial intelligence software, specifically machine learning as a service (MLaaS). Similar to software as a service (SaaS) or infrastructure as a service (IaaS), MLaaS is when software vendors supply prebuilt machine learning solutions to other businesses to embed within their operations or applications.
Robotic process automation (RPA) software development. The ability to automate tedious, time-consuming tasks, or to assist employees with supervised automation, is a massive benefit to businesses: companies can save employees time and ensure that processes are conducted properly. The RPA market is estimated to grow to $2.1 billion by 2021.
Big Data requires machine learning data catalogs (MLDCs):
Big data is a critical aspect of digital transformation, as is leveraging data to improve business decision-making. Because companies have put such an emphasis on becoming data-driven organizations, they have empowered their employees to access huge data sets, often through self-service business intelligence (self-service BI) applications.
2021-2022: Explosion of AI development - more updates coming soon!
References:
https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
https://www.g2.com/articles/history-of-artificial-intelligence
https://www.livescience.com/49007-history-of-artificial-intelligence.html
https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf