In 2016, OAR awarded $2,000 research grants to Shalini Sivathasan, a Ph.D. student in educational and counseling psychology at McGill University in Canada, and Andrea Trubanova, a graduate student in clinical psychology at Virginia Polytechnic Institute and State University. Both planned to study interventions that held the promise of improving facial emotion recognition.
Sivathasan’s study examined emotion processing in music, an area in which children with autism spectrum disorder (ASD) often show great interest and skill. Her goal was to directly compare recognition of music-evoked emotions and facial expressions of emotion, a comparison that had not previously been studied.
Trubanova’s study assessed the feasibility of an attention modification intervention to reduce facial emotion recognition deficits in individuals with ASD. While emotion recognition develops early in childhood, young children with ASD experience more difficulty with recognition of certain expressions compared to their typically developing peers.
Altering Gaze Patterns to Improve Emotion Recognition
Trubanova’s aim was to alter participants’ gaze patterns by highlighting the facial features that are important for facial emotion recognition. She recruited eight participants with ASD between the ages of 9 and 12 to complete a series of measures and tasks evaluating their facial emotion recognition, expression, and social skills. The participants attended 10 bi-weekly individual sessions, each lasting approximately 20 minutes, and Trubanova collected data at the start and end of treatment.
During the sessions, participants sat in front of an eye-tracking screen and watched 17 five-second videos, taken from TV shows and movies, of an actor or actors displaying an emotion. Trubanova placed a white dotted-line square around the actors’ faces to draw attention to the social stimulus conveying the target emotion. As the child proceeded through the 10-session program, the number of videos with a square decreased in each session. At the end of each video, the child was asked which emotion was displayed in the clip, with options presented in print on the screen.
While participants showed clear interest and families reported a high degree of satisfaction, the clinical outcome was less clear. Only one of the eight children showed a reliable increase in the ability to recognize facial emotions, although five of the eight parents noted a slight increase in their child’s ability to do so.
The secondary aim of the study was to explore the preliminary impact of the attention retraining intervention on more general outcome measures, including social skills. The outcomes indicated decreases in:
- Social impairment for two participants
- Problem behaviors for three participants
- Alexithymia, the inability to identify and describe one’s emotions, for two participants
Parents’ assessment showed a greater impact, with four parents rating their children as increasing in emotion recognition and social skills over the 10 weeks. Additionally, six parents noted a decrease in behavioral problems over the course of the study.
Finally, Trubanova reported that the change in gaze toward socially relevant cues was not significant from pre- to post-intervention or during the treatment sessions, indicating that the change in facial emotion recognition reported by parents cannot be attributed to changes in viewing patterns (i.e., looking more at the faces).
Trubanova noted in her study summary that the next step should be larger studies of the intervention to determine more definitively whether it improves children’s ability to recognize facial emotions.
Music as a Means to Better Recognition of Emotions
Research studies have suggested that children with ASD can accurately identify emotions conveyed in instrumental music excerpts. Sivathasan’s study set out to further explore that ability in children with a range of cognitive abilities.
Twenty-three adolescents with ASD between the ages of 12 and 17, with varying levels of cognitive skills, had participated in this ongoing study at the time she reported results to OAR. Those participants completed two emotion recognition tasks in which they were asked to identify basic emotions (i.e., happy, sad, or fearful) conveyed in facial expressions displayed on a computer screen as well as in short musical clips.
Although children with ASD have been shown to have difficulty in identifying facial emotions, the adolescents in Sivathasan’s study showed high rates of accuracy and speed in identifying emotions in facial expression and music, regardless of their level of cognitive functioning. The findings also suggest that including music as a means of developing and teaching emotion recognition skills may be effective.
Greater emotion recognition accuracy also appears to be related to having fewer difficulties with social cognition (when identifying facial expressions of emotion) and social awareness (when identifying music-evoked emotions). Teachers rated students who exhibited fewer restricted or repetitive behaviors in the classroom as better at recognizing emotions through facial expressions and music.
The next steps in the ongoing study will be to compare these findings with those from younger children with ASD, as well as from typically developing children, to better understand how these skills develop across age and ability level.
Findings from this study may advance research in educational psychology and music education and will provide insight into adapting curricula to promote effective learning of emotional, social, and communication skills in children with ASD.