Just like Babelfish, we try to read what the other person is REALLY trying to say. Spoken language is just a small part of it, albeit the part which is easiest for others to (mis)interpret. But humans also speak, for instance, through facial expressions, body posture, gaze direction, and pheromones.
Our approach consists of reading the emotions of other people. It analyses their personality characteristics from their word usage, social networking dynamics, facial expressions, and body language. Towards that goal we have developed our own personality model, made up of different personas (personality archetypes): the groupflow personas (bee/ant/leech) and the alternative-reality personas (fatherlander/nerd/spiritualist/treehugger). We also compute the FFI characteristics (openness, conscientiousness, extroversion, agreeableness, neuroticism), Schwartz values (tradition, achievement, power, benevolence), Haidt moral foundations (care, fairness, loyalty, authority, sanctity), and DOSPERT risk attitudes (financial, health, recreational, ethical, and social risk taking) from the words people use, but also from their facial expressions in response to provocative movies, from how they move their bodies, and from the tone and pitch of their voices.
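To make the word-usage channel concrete, here is a minimal sketch of lexicon-based trait scoring: each trait gets a word list, and a text is scored by the fraction of its words matching each list. The mini-lexicons below are invented placeholders for illustration; real systems use large validated dictionaries, and our actual models are more sophisticated than this.

```python
from collections import Counter
import re

# Hypothetical mini-lexicons, for illustration only; production systems
# use large validated dictionaries rather than five words per trait.
TRAIT_LEXICONS = {
    "openness": {"imagine", "curious", "art", "novel", "explore"},
    "conscientiousness": {"plan", "organize", "duty", "careful", "schedule"},
    "extroversion": {"party", "talk", "friends", "social", "outgoing"},
    "agreeableness": {"kind", "help", "trust", "care", "share"},
    "neuroticism": {"worry", "stress", "afraid", "nervous", "anxious"},
}

def trait_scores(text: str) -> dict:
    """Score each trait as the fraction of words matching its lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return {
        trait: sum(counts[w] for w in lexicon) / total
        for trait, lexicon in TRAIT_LEXICONS.items()
    }

scores = trait_scores("We plan carefully and organize our schedule, but I worry a lot.")
```

The same scoring idea extends to values, moral foundations, and risk attitudes by swapping in the corresponding word lists.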
Our tools provide both a lens and a mirror. Similar to the magic mirror of Snow White's evil stepmother, we provide a virtual mirror that tells a person their beauty in comparison to others. It tells them what others think about them and how much others trust them, so they can improve themselves. It also provides a lens for looking at others, to gain insights into how they see us.
While for the last twenty years we have focused mostly on the dynamics of online interaction using text and online social media, our team has recently developed a series of new video and audio analysis tools to offer a magic mirror for face-to-face interaction:
1. Measuring body entanglement, developed by Josephine Van Delden
2. Measuring turn-taking through diarization, developed by Tobias Zeulner
3. Measuring gaze direction through gaze detection, developed by Moritz Mueller
4. Measuring emotions using multimodal approaches, developed by Jakob Kruse
The objective is to build and test a real-time magic mirror that analyses teams collaborating face-to-face and gives team members real-time feedback on the quality of their interaction and on how they can improve their teamwork to get into groupflow. It combines the tools and methods (1) to (4) described above. The goal, as our research and that of many others has shown, is to become a better team player by being more conscientious, caring, collaborative, and creative!
What if we could build a Babelfish to talk not just to humans, but also to animals? In fact, at least in humble beginnings, we can. We can apply the same AI algorithms we developed for reading human emotions to reading the emotions of animals. In past research we have measured the emotions of dogs, cats, horses, and cows based on their facial expressions, body posture, and voice.
In further work we want to extend this to reading the intents of animals (defining a list of actions the animal wants to take), analyzing for example the interaction between a dog and a human, or two dogs solving a task together. This will provide dog owners with a similar magic mirror when interacting with their dog, helping them communicate their intent to the animal through immediate feedback.
Until recently, the vegetative nature of plants, as laid out over 2000 years ago in Aristotle's treatise "On the Soul", was accepted wisdom. But it has recently been shown that, far from just reproducing and growing, plants are aware of their environment and communicate through numerous senses. Tomato plants produce sounds when stressed, mustard plants respond to the munching sounds of caterpillars by producing defensive chemicals, and mimosas learn not to respond to harmless shaking, to name just a few examples. In our own research we measure electrical potential differences between the leaves and roots of plants such as basil or mimosas. These are similar to the action potentials occurring in the brains of animals, which are responsible for signaling information through the body.
In our work we are using the plant spiker box to measure "emotions of plants", i.e., potential differences in response to interaction with humans. We have trained machine learning models to predict the emotions of humans interacting with basil, mimosas, and garden plants. In new work we are building machine learning models which, based on the plant spiker box signal, will recognize human movement, emotions, and different individuals.
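The basic recipe behind such models can be sketched as: extract simple features from each voltage trace, then classify by nearest centroid. The synthetic traces, signal levels, and class labels below are stand-ins for illustration; real recordings from the plant spiker box and our actual models would replace them.

```python
import math
import random

random.seed(0)

def features(signal):
    """Simple features from a voltage trace: mean level and variability."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    return (mean, std)

# Synthetic stand-ins for plant spiker box traces; levels and labels are
# invented for illustration, not measured values.
def make_trace(level, noise, n=200):
    return [level + random.gauss(0, noise) for _ in range(n)]

calm_traces = [features(make_trace(0.1, 0.01)) for _ in range(10)]
stim_traces = [features(make_trace(0.5, 0.10)) for _ in range(10)]

def centroid(points):
    return tuple(sum(coord) / len(points) for coord in zip(*points))

centroids = {"calm": centroid(calm_traces), "stimulated": centroid(stim_traces)}

def classify(trace):
    """Assign a new trace to the class with the nearest feature centroid."""
    f = features(trace)
    return min(centroids, key=lambda label: math.dist(f, centroids[label]))

label = classify(make_trace(0.5, 0.10))
```

Recognizing human movement, emotions, or individuals from the plant signal follows the same pattern, with richer features and more expressive classifiers.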