Since its early days with Alan Turing, AI has advanced enormously. There is now a wide array of technologies, from voice recognition to biometrics and automation. Newer technologies include natural language generation, which turns data into natural language, and virtual agents such as Google Assistant and IBM Watson (Gundavajyala).
By the 1950s, the idea of AI had been explored only shallowly in fiction, such as the Tin Man in The Wizard of Oz. Researchers fascinated by the idea held the Dartmouth Summer Research Project on Artificial Intelligence in 1956, where the term “artificial intelligence” was coined, but progress stalled for lack of sufficient hardware. From 1957 to 1974, AI improved rapidly, with new inventions and growing public approval. Interest and funding later died down, yet progress did not stop, producing advances such as the chess computer Deep Blue and, eventually, Google. One reason is that, with enough computing power, almost anything is possible; as Moore’s law states, the speed and memory of computers roughly double every year (Anyoha).
AI will be powerful, but current hardware may not be able to handle what is to come. One potential solution is quantum computing, a newer technology in which a bit is not limited to being a 1 or a 0 but can exist in a superposition of both states at once. This gives the system far more possibilities to explore than a single one-or-zero value, which could help AI develop faster and solve problems more efficiently than basic decision trees allow (Dilmegani).
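To make the idea of superposition more concrete, here is a minimal Python sketch (an illustration, not code from the cited sources) that simulates a single qubit: it starts in a definite 0 state, a Hadamard gate places it in an equal mix of 0 and 1, and each simulated measurement then collapses it to one value.

```python
# Minimal illustration of superposition: a qubit's state is a weighted
# combination of 0 and 1, and only a measurement forces a definite value.
import numpy as np

# Start in the definite state |0> = [1, 0]
state = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Probabilities of measuring 0 or 1 come from the squared amplitudes.
probs = np.abs(state) ** 2
print("P(0) =", probs[0], " P(1) =", probs[1])  # about 0.5 each

# Simulate 1000 measurements: each one yields a single 0 or 1.
samples = np.random.choice([0, 1], size=1000, p=probs)
print("measured 0:", int(np.sum(samples == 0)), "times;",
      "measured 1:", int(np.sum(samples == 1)), "times")
```

The point of the sketch is only that, before measurement, the qubit is genuinely in both states at once rather than secretly in one of them, which is where quantum computers get their extra room to explore possibilities.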