Previously, I mentioned how a chatbot designed for children has to treat its interactions fundamentally differently than one made for adults. The exigence of most adult-chatbot communication, which in most cases amounts to "I need help" or "I was redirected here instead of to human support", is different from the exigence of most child-robot communication, where a child can't reasonably be expected to try to get anything out of what he or she probably sees as a conversation with a robotic friend. However, this makes the job of a child-oriented chatbot all the more challenging when it attempts to deal with, or otherwise account for, a child's emotional issues.
Of course, this somewhat applies to normal chatbots too. One earlier example was Woebot, a chatbot aimed at psychological health. The website mentions that Woebot establishes "a bond with users that appears to be non-inferior to the bond created between human therapists and patients." This implies that, at least in part, Woebot gauges emotion from the patient explicitly stating his or her emotions, as would happen in a therapist-patient relationship. Indeed, the exigence of the bot is clear: it is downloaded specifically for the purposes of mental health.
Child-oriented chatbots wouldn't have this same luxury. Even disregarding the fact that not many children I know could adequately express their feelings if they wanted to, if a chatbot adopts the persona of a friend or mentor, it becomes harder to establish any need to express feelings, since children would only talk to the bot casually. While a chatbot can always just ask "how are you feeling?", this most likely wouldn't yield accurate results all of the time (imagine being asked this question yourself). Instead, a chatbot would have to infer emotions from the language being used.
Given adequately labelled data, natural language models can identify both stress levels and emotion in text. However, it's unclear whether the same method used in the study would work on the language of young children. Children have a smaller vocabulary, and thus fewer emotionally charged word choices to detect; beyond that, much of the human ability to interpret the emotions of young children (for me, anyways) relies on non-verbal cues and vocal inflections that can't be fed into a chatbot.
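To make the idea of inferring emotion from text alone concrete, here is a minimal and deliberately naive sketch. Real systems, including the kind of trained models the study describes, learn these associations from labelled data; the hand-made word lists and example messages below are invented purely for illustration.

```python
from collections import Counter

# Toy emotion lexicon -- a real system would learn word-emotion
# associations from labelled data, not use a hand-made list like this.
EMOTION_WORDS = {
    "sad": {"sad", "lonely", "cry", "miss"},
    "happy": {"fun", "yay", "love", "play"},
    "scared": {"scary", "afraid", "dark", "monster"},
}

def guess_emotion(message: str) -> str:
    """Score each emotion by counting lexicon matches; 'neutral' if none."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    scores = Counter()
    for emotion, lexicon in EMOTION_WORDS.items():
        scores[emotion] = sum(1 for w in words if w in lexicon)
    best_emotion, best_score = scores.most_common(1)[0]
    return best_emotion if best_score > 0 else "neutral"

print(guess_emotion("i miss my dog and i want to cry"))  # sad
print(guess_emotion("we play outside and it was fun!"))  # happy
print(guess_emotion("what is for dinner"))               # neutral
```

Even this toy version shows the core problem raised above: a child's message with a small vocabulary and no explicit feeling words ("what is for dinner") simply reads as neutral, no matter what the child is actually feeling.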