These toys don't just talk; they converse
Above you can see the toys instrumented with radios, and a screenshot of the artificial intelligence generating lines to be spoken by characters through the laptop speakers.
Toys2Life is creating an artificial-intelligence-enabled platform that allows dolls and action figures to talk to one another. Children control the conversation by moving dolls closer to and farther from one another to select who talks to whom. A low-cost, Bluetooth Low Energy (BLE) based real-time proximity matrix tracks the dolls. A tablet, laptop, or smartphone provides the intelligence and the audio database.
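As a rough illustration of how such a proximity matrix could be built, the sketch below converts pairwise BLE signal-strength (RSSI) readings into distance estimates using the standard log-distance path-loss model. The function names, calibration constants, and toy names are our own assumptions, not the actual Toys2Life implementation.

```python
# Hypothetical sketch: estimate pairwise distance from BLE RSSI readings
# using the log-distance path-loss model, then build a proximity matrix.
# TX_POWER (RSSI at 1 m) and PATH_LOSS_EXPONENT are assumed calibration values.
TX_POWER = -59            # measured RSSI at 1 metre, device dependent
PATH_LOSS_EXPONENT = 2.0  # ~2 in free space, higher indoors

def rssi_to_distance(rssi):
    """Rough distance estimate (metres) from a single RSSI sample."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_EXPONENT))

def proximity_matrix(rssi_readings):
    """rssi_readings: {(toy_a, toy_b): rssi} -> {(toy_a, toy_b): metres}."""
    return {pair: rssi_to_distance(rssi) for pair, rssi in rssi_readings.items()}

readings = {("pirate", "princess"): -59, ("pirate", "robot"): -75}
matrix = proximity_matrix(readings)
# The closest pair is a natural candidate for the dialog engine to converse.
closest = min(matrix, key=matrix.get)
```

In practice RSSI is noisy, so a real system would smooth readings over time before trusting a distance estimate.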
The benefits to kids, and hence to society, will be twofold: the first is FUN; the second is an outlet for creative, intrinsically motivated learning. These benefits come from Toys2Life's innovation in artificial intelligence. By working at the sentence level rather than the word level, and by using emotion and intent extraction together with a probabilistically generated combinatoric dialog space, the system produces highly varied interactions between characters from a much more limited supply of phrases.
Early play tests with the prototype have shown promise. Even though kids may not know what the “Internet of Things” is, they enjoy playing on it. Patents on the artificial intelligence (AI) and BLE innovations have been filed. Thus far, children have shown great interest in creating their own characters, through which they can learn about the mechanics of language, the emotional content of language, and the process of programming.
Minecraft, like Legos, Tinker Toys, and Erector Sets before it, has shown that when kids are given a good platform to create, learning can be fun (Junco 2014). Toys2Life takes the concept of a child writing a story to the next level: creating a character that can interact with other characters, both those provided by Toys2Life and those created by other children.
Too many educational toys fall into the no man’s land between being fun enough for playing and educational enough for learning. Toys2Life is an infinite game, where players adjust the rules to continue play, as opposed to a finite game with winners and losers working toward an end goal, like most modern video games. The pull-string Chatty Cathy dolls of the sixties have given way to toys with an audio chip that deliver the same dozen phrases digitally. The latest crop of smart toys attempts to talk and interact with the child directly. That is a very difficult problem, due to the unreliability of speech recognition with children, the limited emotional content of synthetic speech, the limited content available with voice-actor speech, the AI required for unconstrained interaction, and, most difficult of all, the high expectations children have of a digital toy that is supposed to talk with them about whatever topic they wish.
In the more constrained environment of toys that talk to one another, a much smaller collection of pre-recorded phrases can provide a large combinatorial set of interactions. Using Toys2Life’s patent-pending “dialog simulation,” different levels of context can be used. In a context-free, unscripted environment, characters created without knowledge of one another can converse and interact.
On the other end of the spectrum, fully scripted interactions can be woven together into a sort of “choose your own adventure” experience, where the child not only uses the proximity of the dolls or action figures to control who talks to whom, but also makes decisions that affect the thread of the story through a branching narrative tree. Users can start with simple context-free content generation and control their character as it interacts with other characters. From there, they can move on to telling the story they want to tell with their character using single-sided dialog, identifying the emotional triggers in their story that should elicit appropriate responses from the other side. Finally, users can dive deep and create their own branching dialog tree, building an experience that sits somewhere between a movie, a video game, and doll or action-figure play.
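A branching narrative tree of the kind described above can be sketched as a simple dictionary of nodes, each with a spoken line and the choices that lead to the next node. The structure and all names here are illustrative assumptions, not the actual Toys2Life format.

```python
# Illustrative "choose your own adventure" tree (our own structure, not
# Toys2Life's): each node has a (speaker, line) pair and named choices.
story = {
    "start": {
        "line": ("pirate", "Ahoy! Will ye help me find the treasure?"),
        "choices": {"yes": "team_up", "no": "go_alone"},
    },
    "team_up": {
        "line": ("princess", "Aye, let's sail together!"),
        "choices": {},
    },
    "go_alone": {
        "line": ("pirate", "Then I'll brave the seas alone."),
        "choices": {},
    },
}

def play(story, decisions):
    """Walk the tree, consuming the child's decisions; return lines spoken."""
    node, spoken = "start", []
    while node:
        entry = story[node]
        spoken.append(entry["line"])
        if not entry["choices"]:
            break
        node = entry["choices"][decisions.pop(0)]
    return spoken

transcript = play(story, ["yes"])
```

The child's proximity play selects who talks to whom; the decisions at branch points select which path through the tree the story takes.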
The innovation behind the Toys2Life system includes Bluetooth Low Energy (BLE) radio innovation, artificial intelligence (AI) innovation, and innovation in applying these technologies to education. The radios report the proximity of each toy to the others, and that proximity is controlled by the child. The proximity-matrix data stream is fed into the dialog engine, whose AI first chooses which characters will converse, based on proximity and history; then chooses the dialog model to converse through; and finally chooses, probabilistically, the phrases each character will say for the current dialog model.
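The selection pipeline just described can be sketched as follows. This is our own minimal structure, not the actual engine: a pair is chosen from proximity and recent history, and a dialog model (here just a list of phrase types) drives alternating, randomly chosen phrases.

```python
import random

rng = random.Random(0)  # seeded so the sketch is reproducible

def choose_pair(proximity, history):
    """Closest pair wins, unless it just conversed (then take next closest)."""
    ranked = sorted(proximity, key=proximity.get)
    fresh = [p for p in ranked if not history or p != history[-1]]
    return (fresh or ranked)[0]

def run_dialog(pair, model, phrases):
    """Alternate speakers through the model's phrase types, picking each
    phrase at random from the speaker's phrases of that type."""
    lines = []
    for i, phrase_type in enumerate(model):
        speaker = pair[i % 2]
        lines.append((speaker, rng.choice(phrases[speaker][phrase_type])))
    return lines

proximity = {("pirate", "princess"): 0.3, ("pirate", "robot"): 1.8}
history = [("pirate", "robot")]   # this pair just spoke, so skip it
phrases = {
    "pirate": {"greeting": ["Ahoy there!"]},
    "princess": {"greeting": ["Good day to you."]},
}
pair = choose_pair(proximity, history)
transcript = run_dialog(pair, ["greeting", "greeting"], phrases)
```

A real engine would weight phrase choices and track longer history, but the shape of the data flow is the same: proximity and history pick the pair, the model picks the structure, probability picks the words.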
It is the tagging feature in the dialog engine that will allow children to learn about the emotional content of language. Kids will study example phrases with tags like “Angry_Response”, “Disgusting_Statement”, or “Joyful_Response” to understand how to create their own “Angry_Response” that might follow an “Insult” or their own “Statement_Of_Regret” that might elicit a “Sympathy” or “Reassurance” response.
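The trigger-to-response pairings described above could be represented as a small lookup table, as in this hypothetical sketch (the table contents and function names are ours, built only from the tags named in the text).

```python
# Hypothetical illustration of the tag pairings described above: which
# phrase tags are valid responses to which trigger tags.
RESPONSE_TAGS = {
    "Insult": ["Angry_Response"],
    "Statement_Of_Regret": ["Sympathy", "Reassurance"],
}

def valid_responses(trigger_tag, character_phrases):
    """Return the character's phrases whose tag can follow trigger_tag."""
    allowed = set(RESPONSE_TAGS.get(trigger_tag, []))
    return [text for text, tag in character_phrases if tag in allowed]

my_phrases = [
    ("That's not very nice!", "Angry_Response"),
    ("There, there, it will be all right.", "Reassurance"),
]
valid_responses("Insult", my_phrases)
```

A child tagging their own phrases is, in effect, filling in rows of this table, which is where the learning about emotional content happens.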
To create a character, kids will learn about STEAM topics such as writing and set theory as they determine which phrases can be “greetings”, “exclamations”, or both. They will learn about computational thinking, using variables and probability, as they create phrases to be picked by the probability engine based on the weightings they select for different tags. They will also learn about grammar and syntax, from an English perspective as they write substitutable phrases, and from a programming perspective as they use a template to create a traversable phrase tree that sends their character on a scripted choose-your-own-adventure.
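Both ideas in the paragraph above, set membership and weighted probability, can be shown in one small sketch. The phrase list and weights are invented for illustration; a phrase with a non-zero weight for both tags belongs to both sets.

```python
import random

# Sketch of the weighting idea: each phrase carries a weight per tag, and a
# phrase weighted for both "greeting" and "exclamation" is in both sets.
phrase_weights = {
    "Ahoy!": {"greeting": 0.7, "exclamation": 0.9},
    "Good morning.": {"greeting": 1.0},
    "Shiver me timbers!": {"exclamation": 1.0},
}

def phrases_for(tag):
    """All phrases with a non-zero weight for this tag (set membership)."""
    return [p for p, w in phrase_weights.items() if w.get(tag, 0) > 0]

def pick(tag, rng=random):
    """Weighted random choice among the tag's phrases."""
    pool = phrases_for(tag)
    weights = [phrase_weights[p][tag] for p in pool]
    return rng.choices(pool, weights=weights, k=1)[0]

# The set-theory angle: "Ahoy!" sits in the intersection of the two sets.
both = set(phrases_for("greeting")) & set(phrases_for("exclamation"))
```

Adjusting a weight changes how often the probability engine picks that phrase, which is exactly the variables-and-probability lesson the text describes.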
To create content for the Toys2Life system, writers write dialog models that allow various combinations of phrase types to be traded back and forth in small conversations. Within the dialog engine, writers don't have to know one another, or know about one another's characters, for their characters to talk to each other. Writers simply create phrases for their characters that match some of the phrase types used in the dialog models.
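That decoupling can be sketched as a compatibility check: a dialog model names only phrase types, and any two characters whose phrase sets cover the alternating slots can converse, even if their writers never met. The model format and character data here are illustrative assumptions.

```python
# Sketch of the decoupling described above. A dialog model is a list of
# phrase types whose slots alternate between two speakers.
MODEL = ["greeting", "question", "answer"]

def can_converse(model, char_a, char_b):
    """char_a takes even slots, char_b odd slots; each needs every type
    in their slots. char_a/char_b map phrase type -> list of phrases."""
    a_needs = set(model[0::2])
    b_needs = set(model[1::2])
    return a_needs <= set(char_a) and b_needs <= set(char_b)

# Two characters written independently, by writers who never met:
pirate = {"greeting": ["Ahoy!"], "answer": ["Aye, over yonder."]}
robot = {"question": ["Where is the harbour?"]}
can_converse(MODEL, pirate, robot)
```

Note the check is directional: the pirate can open this model, but the robot (with no greeting) cannot.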
The toys can be dolls, action figures, figurines, plush toys, trucks, boats, cars, or trains. Some of the toys may be stationary objects such as a doll house, school, police station, or castle. Stationary toys may be included in the real-time proximity map simply to help the dialog engine sequence dialog models involving locations, or they may also have a character and a voice associated with them.
As additional phrase types are added, existing characters may be updated to participate in new dialog models that use a new phrase type, provided the character has existing phrases that can be classified with a non-zero weighting for that type.
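The update rule above amounts to a one-line check, sketched here under the same assumed per-phrase weighting structure as earlier (names are ours, not the Toys2Life API).

```python
# A character qualifies for dialog models using a new phrase type when
# some existing phrase carries a non-zero weight for that type.
def supports_new_type(character_phrases, new_type):
    """character_phrases: {text: {tag: weight}}."""
    return any(w.get(new_type, 0) > 0 for w in character_phrases.values())

pirate = {"Ahoy!": {"greeting": 1.0, "battle_cry": 0.4}}
supports_new_type(pirate, "battle_cry")   # existing phrase already fits
supports_new_type(pirate, "farewell")     # no phrase qualifies yet
```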
Contact Isaac Davenport for more information: