Journal article

Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks

Abstract

BACKGROUND: The Face-to-Face Still-Face (FFSF) task is a validated and commonly used observational measure of mother-infant socio-emotional interactions. With the ascendance of deep learning-based facial emotion recognition, complex coding tasks, such as the coding of FFSF videos, may now be performed with a high degree of accuracy by deep neural networks (DNNs). The primary objective of this study was to test the accuracy of four DNN image classification models against the coding of infant engagement conducted by two trained, independent manual raters.

METHODS: Sixty-eight mother-infant dyads completed the FFSF task at three timepoints. Two trained, independent raters undertook second-by-second manual coding of infant engagement into one of four classes: 1) positive affect, 2) neutral affect, 3) object/environment engagement, and 4) negative affect.

RESULTS: Training four different DNN models on 40,000 images, we achieved a maximum accuracy of 99.5% on classification of infant frames taken from recordings of the FFSF task, with a maximum inter-rater reliability (Cohen's κ) of 0.993.

LIMITATIONS: This study inherits all sampling and experimental limitations of the original study from which the data were drawn, namely a relatively small and primarily White sample.

CONCLUSIONS: Given the extremely high classification accuracy, these findings suggest that DNNs could be used to code infant engagement in FFSF recordings. DNN image classification models may also improve the efficiency of coding other observational tasks, with applications across multiple fields of human behavior research.
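The abstract does not name the four DNN architectures or the training pipeline, but the general workflow it describes (fine-tuning a pretrained image classifier on labeled frames, then measuring agreement with a manual coder via Cohen's κ) can be sketched as follows. This is a minimal illustration, not the authors' published code: the ResNet-18 backbone, directory layout, hyperparameters, and file paths are all assumptions.

# Illustrative sketch only: fine-tune a pretrained CNN on single FFSF frames
# labeled with the four engagement classes, then report Cohen's kappa between
# model predictions and a human rater's codes on a held-out set.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader
from sklearn.metrics import cohen_kappa_score

CLASSES = ["positive_affect", "neutral_affect", "object_engagement", "negative_affect"]

# Standard ImageNet preprocessing; the study's actual pipeline is not specified.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: frames/{train,val}/<class_name>/<frame>.png
train_ds = datasets.ImageFolder("frames/train", transform=preprocess)
val_ds = datasets.ImageFolder("frames/val", transform=preprocess)
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=64)

device = "cuda" if torch.cuda.is_available() else "cpu"
# ResNet-18 is an assumed backbone; requires torchvision >= 0.13 for the weights API.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # replace head with 4-way output
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):  # a handful of epochs is enough for a sketch
    model.train()
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Agreement with the manual rater, reported as Cohen's kappa (as in the abstract).
model.eval()
preds, truths = [], []
with torch.no_grad():
    for images, labels in val_dl:
        preds.extend(model(images.to(device)).argmax(dim=1).cpu().tolist())
        truths.extend(labels.tolist())
print("Cohen's kappa vs. manual coding:", cohen_kappa_score(truths, preds))

In practice, per-frame predictions would likely be aggregated back to second-by-second codes (for example, by majority vote over the frames within each second) before comparison with the manual coding; the abstract does not specify how the published accuracy and κ figures were aggregated.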

Authors

Faltyn M; Krzeczkowski JE; Cummings M; Anwar S; Zeng T; Zahid I; Ntow KO-B; Van Lieshout RJ

Journal

Infant Behavior and Development, Vol. 71

Publisher

Elsevier

Publication Date

May 1, 2023

DOI

10.1016/j.infbeh.2023.101827

ISSN

0163-6383
