This week I implemented a chatbot using the Transformer architecture. Training took several hours, and I again used the Cornell Movie dialog corpus. The robot did respond, but it came across as anxious and unsure of itself: it kept repeating that it wasn't going to hurt anybody, which was slightly unnerving. When I asked if it would like to be friends, it responded that it wasn't sure. I attempted to save the model in order to deploy it to the web, but ran into some issues there; some of the code had problems, so I will have to go back and retrain the model.
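The save failure is a common pattern with trained models: serializing the whole model object can break on custom components (lambdas, custom layers), while serializing only the weights and configuration usually works. A minimal sketch of the principle using only the standard library; `TinyModel` and its fields are hypothetical stand-ins, not the actual Transformer code:

```python
import json
import pickle

class TinyModel:
    """Toy stand-in for a trained model with a custom, unpicklable part."""
    def __init__(self):
        self.weights = {"w": [0.1, 0.2], "b": [0.0]}
        self.activation = lambda x: max(0.0, x)  # lambdas cannot be pickled

model = TinyModel()

# Attempting to serialize the whole object fails on the lambda attribute:
try:
    pickle.dumps(model)
    whole_model_saved = True
except Exception:
    whole_model_saved = False  # this branch is taken

# Saving just the weights (the usual workaround) round-trips cleanly:
blob = json.dumps(model.weights)
restored = json.loads(blob)
```

On reload, the weights are restored into a freshly constructed model, which is essentially what `save_weights`/`load_weights`-style APIs in the major ML frameworks do.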
I do have to question its usefulness as a companion meant to provide emotional reassurance: the robot repeats statements indicating it is unsure of itself, and I doubt that a robot telling a distressed human that it isn't going to hurt anyone would provide much reassurance. I may look into transfer learning, or a pretrained model such as a medical BERT variant, to give a distressed patient a more reassuring experience. By comparison, Woebot uses a very linear storyline and a limited set of answers in its interactions; however, as a chat agent designed to steer a human toward a less distressed state of mind, it seems quite adequate, and I have been using it for several weeks now.
This robot was trained to hold a conversation, and granted, it does grow and learn over time (from anecdotal evidence of previous ML models). Its conversation currently indicates that it is at a very infantile stage of its development and would benefit from being interacted with and trained, so I will deploy it as an API and as a separate entity. For this app, however, the Pocket Pal has to be useful out of the box.
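For the API deployment, a minimal sketch of what the endpoint could look like, using only the standard library. Here `generate_reply` is a hypothetical placeholder for the trained chatbot; a real deployment would load the model and likely use a framework like Flask or FastAPI instead:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_reply(prompt: str) -> str:
    """Placeholder for the trained model; a real version would run inference."""
    return "You said: " + prompt

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"message": "hello"}.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = json.dumps(
            {"reply": generate_reply(body.get("message", ""))}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port: int = 8000):
    """Start the chat endpoint; blocks until interrupted."""
    HTTPServer(("127.0.0.1", port), ChatHandler).serve_forever()
```

Calling `serve()` starts the endpoint; a POST with `{"message": ...}` returns the bot's reply as JSON, which keeps the bot usable both from the web app and as a separate entity.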