Post date: 06-Aug-2014 10:22:50
Earlier this year, a "chatbot" won a Turing Test competition organized by a U.K. university. But it soon became clear that the contest had shown only that a piece of software could get fairly good at fooling humans. Now a new artificial intelligence contest is offering a US $25,000 prize to an AI that can successfully answer what are called Winograd schemas: specially constructed questions that are easy for a human to answer but a serious challenge for a computer. Can this approach become a better way of testing for human-level AI?
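To make the idea concrete, here is a minimal sketch (in Python, with a hypothetical data structure; the contest's actual question format may differ) of the classic Winograd schema from Terry Winograd's work: a sentence with an ambiguous pronoun whose referent flips when a single "special" word is swapped. Humans resolve it instantly from commonsense knowledge, while pattern-matching software struggles.

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    """A hypothetical representation of one Winograd schema."""
    template: str        # sentence with a {word} slot for the special word
    candidates: tuple    # the two possible referents of the pronoun
    word_to_answer: dict # special word -> correct referent

# Winograd's original example: swapping one verb flips the answer.
schema = WinogradSchema(
    template=("The city councilmen refused the demonstrators a permit "
              "because they {word} violence."),
    candidates=("the councilmen", "the demonstrators"),
    word_to_answer={
        "feared": "the councilmen",
        "advocated": "the demonstrators",
    },
)

for word, answer in schema.word_to_answer.items():
    sentence = schema.template.format(word=word)
    print(f'{sentence}\n  -> "they" refers to {answer}')
```

Because the two sentence variants differ by only one word, simple statistical cues are of little help; the AI has to actually understand who would fear or advocate violence.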
read more @ http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/winograd-schemas-replace-turing-test-for-defining-humanlevel-artificial-intelligence/?utm_source=roboticsnews&utm_medium=email&utm_c