Meet Geoffrey Hinton, the Man Google Hired to Make AI a Reality


Josh Valcarcel/WIRED

DANIELA HERNANDEZ | Wired

‘I get very excited when we discover a way of making neural networks better — and when that’s closely related to how the brain works.’ — Geoffrey Hinton

Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram. To create one of those 3-D holographic images, you record how countless beams of light bounce off an object and then you store these little bits of information across a vast database. While still in high school, back in 1960s Britain, Hinton was fascinated by the idea that the brain stores memories in much the same way.

  • Rather than keeping them in a single location, it spreads them across its enormous network of neurons. This may seem like a small revelation, but it was a key moment for Hinton — “I got very excited about that idea,” he remembers. “That was the first time I got really into how the brain might work” — and it would have enormous consequences.

  • Inspired by that high school conversation, Hinton went on to explore neural networks at Cambridge and the University of Edinburgh in Scotland, and by the early ’80s, he helped launch a wildly ambitious crusade to mimic the brain using computer hardware and software, to create a purer form of artificial intelligence we now call “deep learning.”

For a good three decades, the deep learning movement was an outlier in the world of academia. But now, Hinton and his small group of deep learning colleagues, including NYU’s Yann LeCun and the University of Montreal’s Yoshua Bengio, have the attention of the biggest names on the internet.

  • Hinton works part-time for Google, where he’s using deep learning techniques to improve voice recognition, image tagging, and countless other online tools.

  • LeCun is at Facebook, doing similar work.

  • Artificial intelligence is suddenly all the rage at Microsoft, IBM, Chinese search giant Baidu, and many others.

While studying psychology as an undergrad at Cambridge, Hinton was further inspired by the realization that scientists didn’t really understand the brain.

  • They couldn’t quite grasp how interactions among billions of neurons gave rise to intelligence.

  • They could explain how electrical signals traveled down an axon — the cable-like protrusion that connects one neuron to another — but they couldn’t explain how these neurons learned or computed.

For Hinton, those were the big questions — and the answers could ultimately allow us to realize the dreams of AI researchers dating back to the 1950s. He doesn’t have all the answers yet. But he’s much closer to finding them, fashioning artificial neural networks that mimic at least certain aspects of the brain. In Hinton’s world, a neural network is essentially software that operates at multiple levels. He and his cohorts build artificial neural nets from interconnected layers of software “neurons,” modeled after the columns of neurons you find in the brain’s cortex — the part of the brain that deals with complex tasks like vision and language.

  • These artificial neural nets can gather information, and they can react to it.

  • They can build up an understanding of what something looks or sounds like.

  • They’re getting better at determining what a group of words means when you put them together.

  • They can do all that without asking a human to provide labels for objects and ideas and words, as is often the case with traditional machine learning tools.

As far as artificial intelligence goes, these neural nets are

  • fast,

  • nimble,

  • efficient, and

  • able to scale extremely well across a growing number of machines, tackling more and more complex tasks as time goes on.
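The layered structure described above can be sketched as a minimal feedforward network. This is an illustrative toy, not Hinton's actual system: the layer sizes, weights, and function names here are all invented for the sake of the sketch, and real deep learning systems learn their weights from data rather than drawing them at random.

```python
import numpy as np

def relu(x):
    # A simple nonlinearity: each artificial neuron "fires" only
    # when its weighted input is positive.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass the input through each layer in turn. Each layer is a
    # (weights, biases) pair, loosely analogous to a column of
    # cortical neurons feeding the next column.
    for w, b in layers:
        x = relu(w @ x + b)
    return x

# Toy network: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),
    (rng.standard_normal((2, 4)), np.zeros(2)),
]
output = forward(np.array([1.0, 0.5, -0.5]), layers)
print(output.shape)  # (2,)
```

Stacking more such layers is what puts the "deep" in deep learning: each level builds a slightly more abstract representation of the raw input than the one below it.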

It’s not just the tech giants who are joining the movement. We’ve seen a slew of deep learning startups, including companies like Ersatz, Expect Labs, and Declara. Where will this next generation of researchers take the deep learning movement?

The big potential lies in deciphering the words we post to the web — the status updates and the tweets and instant messages and the comments — and there’s enough of that to keep companies like Facebook, Google, and Yahoo busy for an awfully long time.

The aim is to give these services the power to actually understand what their users are saying — without help from other humans. “We want to take AI and CIFAR to wonderful new places,” Hinton says, “where no person, no student, no program has gone before.”

Read the whole story at Wired