We all know that virtual personas like Alexa, Siri, and the Google Assistant are not real humans. They can’t laugh the way we laugh, because they are not alive. But it does, in fact, make sense for them to try to be social and emulate our behavior. They just need to get it right. Telling jokes, for example, is not an essential skill for an assistant like Alexa, but it will still cough one up if you ask.
“We’re social beings—we should build machines that are social as well,” says Timo Baumann, a systems scientist in the language technologies institute at Carnegie Mellon University. He researches the interactions that occur between people and virtual assistants, and between people and other people. “Our research shows that it’s super important to try to build a relationship with a user,” he adds. Ideally, there should be a rapport there.
At Carnegie Mellon, for example, Baumann tested a virtual agent—an animated one that participants could both speak with and see on a screen. Its job was to give movie recommendations, and there were two versions, one programmed to be more social than the other. “They [the study participants] liked this more social system significantly more, and they liked the movie recommendations significantly more,” he says. That held even though the part of the system that made the film suggestions was identical in both versions—the more social persona alone resulted in higher ratings.
Of course, anyone who has interacted with an assistant like Alexa or Siri knows that their social chops are works in progress. “The critical problem is that being social is super difficult,” Baumann says—the best they...