Learning with a socially assistive robot
This project had two main goals. First, we wanted to test whether a socially assistive robot could help children learn new words in a foreign language more effectively by personalizing its affective feedback. Second, we wanted to demonstrate that we could create and deploy a fully autonomous robotic system at a school for several months.
We created a socially assistive robotic learning companion to support English-speaking children’s acquisition of a new language (Spanish). In a two-month microgenetic study, 34 preschool children played an interactive game with a fully autonomous robot and the robot’s virtual sidekick, a toucan displayed on a tablet screen. Two aspects of the interaction were personalized to each child: (1) the content of the game (i.e., which words were presented), and (2) the robot’s affective responses to the child’s emotional state and performance. We found that children learned new words and that affective personalization elicited more positive responses from the children.
We developed an integrated experimental paradigm in which children played a second-language learning game on a tablet, in collaboration with a fully autonomous social robotic learning companion.
The supportive affective behavior of a robotic tutor is autonomously learned and personalized to each student over multiple interactive tutoring sessions.
The system had four main parts:
- A fully autonomous social robot platform, Tega, specifically designed to be engaging for children, robust enough to run continuously for several hours, and portable enough to be deployed in the field;
- An educational Android tablet app designed specifically for this interaction, allowing for general curriculum generation and seamless integration with the social robot;
- An Android smartphone that uses the commercial Affdex SDK to automatically analyze facial expressions in real time; and
- A cognitive architecture that integrates affective information from Affdex with educational information from the tablet, feeding both into an affective reinforcement learning algorithm that determines the social robot’s verbal and non-verbal behavior.
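To make the last component concrete, here is a minimal sketch of how such an affective reinforcement learner might work: a tabular, bandit-style learner that maps a discrete estimate of the child’s affective state to a robot behavior, updated from a reward signal. The state names, action names, and reward shaping below are illustrative assumptions, not the study’s actual implementation.

```python
import random

# Illustrative discretization of the child's affective state (e.g., derived
# from Affdex facial-expression metrics) and of the robot's behaviors.
STATES = ["engaged", "bored", "frustrated"]
ACTIONS = ["encourage", "celebrate", "refocus"]

class AffectivePolicy:
    """Per-child mapping from affective state to robot action."""

    def __init__(self, alpha=0.3, epsilon=0.2, seed=0):
        # One Q-value per (state, action) pair, learned per child.
        self.q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
        self.alpha = alpha          # learning rate
        self.epsilon = epsilon      # exploration rate
        self.rng = random.Random(seed)

    def choose(self, state):
        # Epsilon-greedy: usually exploit the best-known action,
        # occasionally explore a random one.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward):
        # One-step update toward the observed reward, which could be a
        # weighted sum of the child's valence and task performance.
        key = (state, action)
        self.q[key] += self.alpha * (reward - self.q[key])
```

In use, the robot would observe the child’s state, call `choose`, act, and then call `update` with a reward computed from the child’s subsequent affect and performance.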
We conducted a 2-month microgenetic study in three “special start” preschool classrooms at a public school in the Greater Boston Area. Thirty-four children ages 3–5, with 15 classified as special needs and 19 as typically developing, participated in the study. The children played an interactive game individually with an autonomous socially assistive robot and a virtual agent situated on a tablet. The game was designed to support second language acquisition. The robot and the virtual agent each took on the role of a peer or learning companion and accompanied the child on a make-believe trip to Spain, where they learned new words in Spanish together.
The study took place over nine sessions. The first session was used for initial assessments. During each of the next seven sessions, each child played the language learning game individually with the robot for about 10 minutes. During the last session, children said goodbye to the robot, and we performed posttests.
The robot interaction was designed as an activity that could take place during “Choice Time” at a preschool, to supplement existing curricula around language learning. Choice Time is a period of the school day, common in many American preschools, during which children select one of several available activities. We found that this can be an ideal time for robot interactions targeted at individual children or small groups, as the robot is presented as just another activity in the classroom.
We found that children learned at least some of the words presented during the interaction. Specifically, the words most frequently learned were those repeated most often during the interaction, such as “blue” and “monkey.”
The affective policy did not converge. This is not surprising: each session lasted only several minutes, while affective interactions are extremely complex and dynamic, and a policy governing them may take much longer to learn. We did see, however, that the robot personalized to each child, learning a different set of mappings from child states to robot actions for each child.
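The personalization effect can be illustrated with a toy example: the same learning rule, fed different reward histories from different children, ends up preferring different actions for the same child state. The action names and reward values below are hypothetical.

```python
def learned_action(reward_by_action, alpha=0.3, steps=10):
    """Run a simple one-step value update against fixed per-action rewards
    and return the action the learner ends up preferring."""
    q = {a: 0.0 for a in reward_by_action}
    for _ in range(steps):
        for a, r in reward_by_action.items():
            q[a] += alpha * (r - q[a])
    return max(q, key=q.get)

# Hypothetical children: child A shows higher valence when the robot
# celebrates; child B responds better to being refocused on the task.
child_a = learned_action({"encourage": 0.2, "celebrate": 0.9, "refocus": 0.1})
child_b = learned_action({"encourage": 0.3, "celebrate": 0.1, "refocus": 0.8})
# The two learners settle on different actions for the same state.
```

Even though the updates converge toward each child’s reward profile, noisy and changing affective responses in the real study mean the full policy would need far more interaction time to stabilize.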
Finally, we saw that children’s valence changed with the robot’s affective personalization.
- Kory Westlund, J. M., Gordon, G., Spaulding, S., Lee, J., Plummer, L., Martinez, M., Das, M., & Breazeal, C. (2016). Lessons From Teachers on Performing HRI Studies with Young Children in Schools. In S. Sabanovic, A. Paiva, Y. Nagai, & C. Bartneck, Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction: alt.HRI (pp. 383-390). IEEE.
- Gordon, G., Spaulding, S., Kory Westlund, J., Lee, J., Plummer, L., Martinez, M., Das, M., & Breazeal, C. (2016). Affective Personalization of a Social Robot Tutor for Children’s Second Language Skills. Proceedings of the 30th AAAI Conference on Artificial Intelligence. AAAI: Palo Alto, CA.
- Kory Westlund, J.*, Gordon, G.*, Spaulding, S., Lee, J., Plummer, L., Martinez, M., Das, M., & Breazeal, C. (2015). Learning a Second Language with a Socially Assistive Robot. In Proceedings of New Friends: The 1st International Conference on Social Robots in Therapy and Education. (*equal contribution).