Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks

abstract

  • BACKGROUND: The Face-to-Face Still-Face (FFSF) task is a validated and commonly used observational measure of mother-infant socio-emotional interactions. With the ascendance of deep learning-based facial emotion recognition, complex behavioral coding tasks, such as the coding of FFSF videos, could potentially be performed with a high degree of accuracy by deep neural networks (DNNs). The primary objective of this study was to test the accuracy of four DNN image classification models against the coding of infant engagement conducted by two trained, independent manual raters.
  • METHODS: Sixty-eight mother-infant dyads completed the FFSF task at three timepoints. Two trained, independent raters undertook second-by-second manual coding of infant engagement into one of four classes: 1) positive affect, 2) neutral affect, 3) object/environment engagement, and 4) negative affect.
  • RESULTS: Training four different DNN models on 40,000 images, we achieved a maximum accuracy of 99.5% on image classification of infant frames taken from recordings of the FFSF task, with a maximum inter-rater reliability (Cohen's κ) of 0.993.
  • LIMITATIONS: This study inherits all sampling and experimental limitations of the original study from which the data were drawn, namely a relatively small and primarily White sample.
  • CONCLUSIONS: Based on the extremely high classification accuracy, these findings suggest that DNNs could be used to code infant engagement in FFSF recordings. DNN image classification models may also have the potential to improve the efficiency of coding other observational tasks, with applications across multiple fields of human behavior research.
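The abstract does not report implementation details, so the following is a minimal, hypothetical sketch of the kind of pipeline it describes: fine-tuning a pretrained image classifier on frame images labeled with the four engagement classes, then comparing model predictions against manual codes with Cohen's κ. The PyTorch/torchvision stack, the ResNet-18 backbone, the ffsf_frames/ folder layout, and all hyperparameters are illustrative assumptions, not the authors' actual methods.

```python
# Hypothetical sketch of a 4-class infant-engagement frame classifier.
# Dataset paths, backbone, and hyperparameters are assumptions for illustration.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import cohen_kappa_score

CLASSES = ["positive_affect", "neutral_affect", "object_engagement", "negative_affect"]

# Frames exported from FFSF recordings, arranged one folder per class (assumed layout).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("ffsf_frames/train", transform=transform)
val_set = datasets.ImageFolder("ffsf_frames/val", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)

# Replace the final layer of a pretrained backbone with a four-class head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):  # illustrative number of epochs
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Agreement between model predictions and the manual codes, reported as Cohen's kappa.
model.eval()
preds, truth = [], []
with torch.no_grad():
    for images, labels in val_loader:
        outputs = model(images.to(device))
        preds.extend(outputs.argmax(dim=1).cpu().tolist())
        truth.extend(labels.tolist())
print("Cohen's kappa vs. manual coding:", cohen_kappa_score(truth, preds))
```

The reported 99.5% accuracy and κ of 0.993 would come from comparing model outputs against the second-by-second manual codes; the sketch above only shows the general shape of that training and evaluation loop.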

authors

  • Faltyn, Mateusz
  • Krzeczkowski, John E
  • Cummings, Mike
  • Anwar, Samia
  • Zeng, Tammy
  • Zahid, Isra
  • Ntow, Kwadjo Otu-Boateng
  • Van Lieshout, Ryan

publication date

  • May 2023