Cora is what we call a digital human. She was created by Soul Machines, a New Zealand company that makes human-like avatars for computers. These avatars not only look like us but act like us as well. You’ve probably heard of driverless cars or automated assistants. These intelligent technologies are going to become increasingly widespread in the future, and Soul Machines believe that we’ll get more out of them if they act more like humans and less like robots. “Human beings are wired to engage on a face-to-face basis,” says the chief business officer of Soul Machines, Greg Cross. “The human face plays a very important role in the way we communicate. At Soul Machines, we are literally putting a face on artificial intelligence and putting a face on the machines of the future.” And it’s not just about looks. Soul Machines make computers that talk and respond to you in the way a real person might. But what exactly is artificial intelligence, or AI? And can machines really act like humans?
Artificial intelligence refers to the ability of a computer to perform tasks normally done by intelligent beings. American scientist John McCarthy came up with the term at a conference in 1956. He invited a team of researchers to discuss the possibility of computers becoming more lifelike. He believed that the way human beings learn and think could be described in such detail that a machine could be made to copy it. But how do humans think? Sometimes it can seem quite logical – like when we solve a maths problem or explain a scientific concept. But it’s not all about facts and figures. A lot of human thought is based on our emotions. Our ability to recognise and understand our emotions and those of the people around us is called emotional intelligence. While it might be easy to imagine a computer that can add numbers, what about a computer that can mimic and understand the way we feel? Greg Cross says it’s not only possible but is happening right now at companies like Soul Machines. “The digital people we create respond to you,” he says, “so if you smile at them, they’ll smile back. If you get upset, they’ll try to be sympathetic about what’s making you upset.”
In 1950, British computer scientist Alan Turing became one of the first people to consider the question “Can machines think?” To find an answer, he created the Turing test. The aim of the test was to see if a computer could trick a person into thinking it was a human. A human judge sat in a separate room and asked the same questions to a computer and to a person. Based on the answers, the judge had to decide which one was the person and which one was the computer. Turing predicted that, in the future, computers would become so smart that we wouldn’t be able to tell them apart from humans. Back then, such an idea would have seemed impossible – but it doesn’t anymore. It took more than sixty years, but in 2014, a computer program called Eugene passed the Turing test for the first time. Based on the answers it gave, Eugene successfully tricked a group of judges into thinking it was a thirteen-year-old boy from Ukraine.
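To make the setup concrete, here is a small Python sketch of how a Turing-style test could be staged: two hidden respondents answer the same questions, and a judge who sees only the answers has to guess which one is the machine. The questions, the canned replies, and the program itself are invented for illustration; this is not Turing’s original design or Eugene’s actual code.

```python
# A sketch of a Turing-style test: two hidden respondents answer the same
# questions, and the judge sees only the labelled answers.
import random

def machine_answer(question):
    # A deliberately simple stand-in for a chat program like Eugene.
    canned = {
        "how old are you?": "I am thirteen years old.",
        "what do you do for fun?": "I like chatting with people on the internet.",
    }
    return canned.get(question.lower(), "Ha, that is a strange question! Ask me another one.")

def human_answer(question):
    # Pre-written answers standing in for a real person, so the judge at the
    # keyboard genuinely cannot tell which respondent is which.
    canned = {
        "how old are you?": "Thirteen, fourteen in March.",
        "what do you do for fun?": "Mostly football with my friends after school.",
    }
    return canned.get(question.lower(), "Hmm, let me think about that one.")

def run_test(questions):
    # Randomly hide which respondent is behind which label.
    labels = {"A": machine_answer, "B": human_answer}
    if random.random() < 0.5:
        labels = {"A": human_answer, "B": machine_answer}

    for q in questions:
        print(f"\nJudge asks: {q}")
        for label in ("A", "B"):
            print(f"  Respondent {label}: {labels[label](q)}")

    guess = input("\nJudge, which respondent is the machine, A or B? ").strip().upper()
    if labels.get(guess) is machine_answer:
        print("The judge spotted the machine.")
    else:
        print("The machine fooled the judge.")

run_test(["How old are you?", "What do you do for fun?"])
```

In a real test, the human respondent would be a live person in another room; here their answers are pre-written so that the person at the keyboard can play the judge without knowing which label is which.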
For a computer to think like a human, it would need something like a human brain. But how do you build one of those? The human brain is very complex. Even the world’s best neuroscientists still have a lot to learn about how it works. At Soul Machines, a team of neuroscientists and other experts are looking at the parts of the brain we do understand. They’re trying to copy how these parts work by building a virtual nervous system. Take smiling. If you see someone smiling at you, chances are you’ll smile back. Soul Machines are building digital humans that can recognise when someone smiles at them. Recognising the smile sends a signal to their digital “brains”, prompting them to smile back. As people understand more about the human brain, they’ll be able to build more sophisticated digital versions that copy these human behaviours.
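The smile example boils down to a loop: perceive an expression, send a signal to the digital brain, and respond in kind. Below is a toy Python sketch of that loop. The class and function names are made up for illustration; Soul Machines’ actual virtual nervous system is far more sophisticated and is not public.

```python
# A toy version of the "see a smile, send a signal, smile back" loop.

class DigitalBrain:
    """Maps incoming perception signals to facial responses."""

    def receive_signal(self, signal):
        # Mirror the expression the "eyes" reported, the way the article
        # describes a smile prompting a smile back.
        if signal == "smile_detected":
            return "smile"
        if signal == "frown_detected":
            return "sympathetic expression"
        return "neutral expression"

def perceive(expression_seen):
    # Stand-in for the vision system that recognises a user's expression.
    return f"{expression_seen}_detected"

brain = DigitalBrain()
for seen in ["smile", "frown", "blank stare"]:
    signal = perceive(seen)
    response = brain.receive_signal(signal)
    print(f"User shows a {seen} -> brain receives '{signal}' -> avatar responds with a {response}")
```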
“The world of the future is an incredibly exciting world,” says Greg Cross. In the not-so-distant future, he predicts that we’ll be able to build digital versions of famous people so they can talk about themselves and their work long after they’ve died. He can even imagine a world where everyday people create digital versions of themselves. Greg thinks that this digital self could do all your boring jobs, like grocery shopping or organising events, while you spend your time doing more important things. He believes that young people growing up today will get to figure out how to advance this technology and solve some of the world’s biggest problems. “I look at the next ten, twenty, and thirty years as the most exciting times to be alive,” he says.
While the future can sound exciting, there are lots of things to consider as we bring computers to life. In New Zealand, all human beings are given certain rights when they’re born, such as the right to be treated with dignity and respect. Many of these rights are expressed and protected in our laws. For example, you’re not allowed to hit another person, no matter how annoyed you might be with them. But technology doesn’t have any rights. If you’re annoyed at the TV because it doesn’t work properly, it wouldn’t be so bad if you hit it gently on the side. But what if the TV had a digital avatar inside that could talk, smile, and react like you do? Should that digital avatar have rights? Would you feel guilty if the avatar was helping you and you said something mean to it? What if it responded like a human and said something mean back? Nick Agar, a professor of ethics at Victoria University of Wellington, says he told the automated assistant on his smartphone to buzz off, and he felt guilty afterwards. He suspects he would feel much worse if the assistant actually looked and acted like a person. “If it talks like a human and does all of these things that are very human-like,” he says, “well, maybe there’s no difference between being a machine and being a human.” Will we teach children it’s OK to be rude to a digital human but not OK to be rude to a real one? If digital humans can process positive and negative emotions like we can, do they deserve all the rights and freedoms that we have? These are questions that people are asking as robots become less robotic and more like you and me.
There are also social considerations that come with the rise of AI. We live in a world where people spend more time on computers and less time interacting face to face. Nick thinks we should decide where robots will be useful in the future and where human contact is essential. He argues that we should fight to preserve social contact as much as possible. “These days, the tricky bits of flying a plane are basically done by humans,” he says, “but many experts believe it’s only a matter of time until planes can fly themselves. Then will we need humans in the cockpit? Maybe not.” On the other hand, he enjoys walking into a café and talking to someone who is making him a coffee. What would life be like if all those people were replaced with digital humans?
While Nick is concerned about some of the possibilities of AI, he’s also excited about its upsides. “One of the great things about machine learning is that it is built for complexity,” he says. “What if we had a computer with the intelligence to find patterns in diseases that no human could ever find? Machine learning offers a potential way forward.” These technological advances might be a few years off, but there are teams of researchers, neuroscientists, psychologists, artists, and engineers working on them all over the world. And who knows, sometime soon when you log on to a website, you might see someone smiling back at you, ready to help. Someone just like Cora.
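For readers curious what “finding patterns” can look like in practice, here is a tiny Python sketch that assumes the scikit-learn library is installed and uses its built-in breast cancer dataset. It is not the research Nick describes, only an illustration of a model learning from medical measurements that would be hard to judge by eye.

```python
# A small illustration of machine learning finding patterns in medical data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 569 tumour samples, each described by 30 measurements (cell size, texture, etc.).
data = load_breast_cancer()

# A random forest looks for combinations of measurements that separate
# benign tumours from malignant ones.
model = RandomForestClassifier(n_estimators=200, random_state=0)

# Estimate how often the learned patterns hold up on samples the model hasn't seen.
scores = cross_val_score(model, data.data, data.target, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```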