Learning with Social Robots
This project combines emerging technologies in social robotics with recent findings from social, developmental, and cognitive psychology to design, implement, and evaluate a new generation of robots capable of interacting with and instructing young learners in a truly social way.
Given growing recognition that the social aspects of children’s environments are central to their ability to learn rapidly and efficiently, we are developing a robot that can use social cues (e.g., motor mimicry and synchrony, affective cues, gaze direction) to help direct children’s learning. Indeed, our initial work demonstrates that the human mind implicitly responds to social cues emitted by a robot in the same way as it does to similar cues emitted by a human.
The goal is to maximize the social repertoire of a robotic system so that it can function not just as a disseminator of information, but also as an interlocutor incorporating social signals to which the learning mind is automatically attuned. We are exploring how that repertoire can be fine-tuned so as to better engage learners from different social groups.
This project was funded by an NSF Cyberlearning grant.
Children aged 3 to 5 years were introduced to two anthropomorphic robots. Each child was invited to discuss his or her favorite animal with the robots; each robot then shared information about its own favorite animal (both animals were unfamiliar to the child). We then tested children’s recall of the information provided: could they recognize which animal was each robot’s favorite? Could they recall the name of the animal or anything the robot had said about it?
Next, we showed the child a novel animal and asked the child to pick one robot to ask about it. After the child chose, however, both robots provided answers, and the child was asked which robot’s answer they believed. Finally, we asked the child a few questions about how much they had liked playing with the robots.
The variable we manipulated was the two robots’ behavior. Both robots produced nonverbal movements throughout the interaction that are typical of ordinary human face-to-face interaction, such as head movements, gaze shifts, and facial expressions. However, one robot attended to the child in a contingent fashion, signaling its attention through head and gaze orientation and timely backchanneling; the other robot’s attention was not contingently directed toward the child.
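To make the manipulation concrete, the sketch below contrasts a contingent and a non-contingent attention loop. It is a minimal illustration under assumed interfaces: the Perception fields, the Robot actuator methods, the timing constant, and the backchannel cues are all hypothetical stand-ins, not the project’s actual software.

```python
import random
import time

BACKCHANNEL_DELAY = 0.4   # seconds: prompt, but not instantaneous (assumed value)
BACKCHANNELS = ["nod", "mm-hmm", "lean-in"]   # illustrative cue set

class Perception:
    """Snapshot of the child's state from a hypothetical perception module."""
    def __init__(self, child_is_speaking=False, child_just_paused=False,
                 child_gaze_target=None, child_face=(0.0, 0.0)):
        self.child_is_speaking = child_is_speaking
        self.child_just_paused = child_just_paused
        self.child_gaze_target = child_gaze_target
        self.child_face = child_face

class Robot:
    """Stub actuators; a real system would drive motors and audio instead."""
    def look_at(self, target):
        print(f"orienting head and gaze toward {target}")
    def play_backchannel(self, cue):
        print(f"backchannel: {cue}")

def contingent_step(robot, p):
    """Attention driven by what the child is doing right now."""
    if p.child_is_speaking:
        robot.look_at(p.child_face)            # hold gaze on the speaker
    elif p.child_just_paused:
        time.sleep(BACKCHANNEL_DELAY)          # timely acknowledgment of the pause
        robot.play_backchannel(random.choice(BACKCHANNELS))
    elif p.child_gaze_target is not None:
        robot.look_at(p.child_gaze_target)     # follow the child's gaze

def noncontingent_step(robot, p):
    """Same movement repertoire, but decoupled from the child's behavior."""
    robot.look_at((random.uniform(-1, 1), random.uniform(-1, 1)))  # gaze not tied to child
    if random.random() < 0.2:                  # backchannels at arbitrary moments
        robot.play_backchannel(random.choice(BACKCHANNELS))

if __name__ == "__main__":
    robot = Robot()
    contingent_step(robot, Perception(child_is_speaking=True))
    noncontingent_step(robot, Perception())
```

The point of the contrast is that both loops draw on the same movement repertoire; only the coupling to the child’s moment-to-moment behavior differs.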
We found that the children treated the robots as interlocutors. They supplied information to the robots and retained what the robots told them. Children also treated the robots as informants from whom they could seek information.
Consistent with studies of children’s early sensitivity to an interlocutor’s nonverbal signals, children were especially attentive and receptive to whichever robot displayed the greater nonverbal contingency. They were more likely to ask the contingent robot about the novel animal, and also believed the contingent robot’s answer more often. Such selective information seeking is consistent with recent findings showing that although young children learn from others, they are selective with respect to the informants that they question or endorse.
The following people collaborated with us on this project:
Paul Harris – Graduate School of Education, Harvard University
David DeSteno – Dept. of Psychology, Northeastern University
Leah Dickens – Dept. of Psychology, Northeastern University
Publications
- Breazeal, C., Harris, P., DeSteno, D., Kory, J., Dickens, L., & Jeong, S. (in press). Young children treat robots as informants. Topics in Cognitive Science. [PDF]
- Kory, J., Jeong, S., & Breazeal, C. L. (2013). Robotic learning companions for early language development. In J. Epps, F. Chen, S. Oviatt, & K. Mase (Eds.), Proceedings of the 15th ACM International Conference on Multimodal Interaction (pp. 71-72). New York, NY: ACM. [PDF]