“Facekit”—Toward an Automated Facial Analysis App Using a Machine Learning–Derived Facial Recognition Algorithm



Introduction: Multiple tools have been developed for facial feature measurement and analysis using facial recognition machine learning techniques. However, several challenges remain before these will be useful in the clinical context of reconstructive and aesthetic plastic surgery. Smartphone-based applications built on open-access machine learning tools can be rapidly developed, deployed, and tested in clinical settings. This research compares the performance of a smartphone-based facial recognition algorithm against direct and digital measurements for facial analysis.

Methods: Facekit is a camera application developed for Android that utilizes ML Kit, an open-access computer vision Application Programming Interface (API) developed by Google. Using the facial landmark module, we measured 4 facial proportions in 15 healthy subjects and compared them to direct surface and digital measurements using intraclass correlation (ICC) and Pearson correlation.

Results: Measurement of the naso-facial proportion achieved the highest ICC of 0.321, where ICC > 0.75 is considered excellent agreement between methods. Repeated-measures analysis of variance of proportion measurements between the ML Kit, direct, and digital methods showed significant differences (F[2,14] = 6.26, P < .05). Facekit measurements of orbital, orbitonasal, naso-oral, and naso-facial ratios had overall low correlation and agreement with both direct and digital measurements (R < 0.5, ICC < 0.75).

Conclusion: Facekit is a smartphone camera application for rapid facial feature analysis. Agreement between Facekit's machine learning measurements and direct and digital measurements was low. We conclude that the chosen pretrained facial recognition software is not accurate enough for conducting a clinically useful facial analysis. Custom models trained on accurate and clinically relevant landmarks may provide better performance.
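The agreement statistics reported above can be illustrated from first principles. The sketch below uses a hypothetical subjects × methods matrix of proportion measurements (the study's actual data are not shown here) to compute a Pearson correlation and a two-way random-effects, single-measure intraclass correlation, ICC(2,1), one common formulation for absolute agreement between measurement methods.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, single-measure ICC(2,1) for absolute agreement.

    ratings: (n_subjects, k_methods) matrix of the same proportion
    measured by each method (e.g., app vs. direct vs. digital).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()    # between methods
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)                 # mean square, rows (subjects)
    msc = ss_cols / (k - 1)                 # mean square, columns (methods)
    mse = ss_err / ((n - 1) * (k - 1))      # mean square, error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical naso-facial proportions for 5 subjects measured two ways
# (illustrative numbers only, not the study's data).
app = np.array([0.42, 0.55, 0.38, 0.61, 0.47])     # smartphone app
direct = np.array([0.50, 0.52, 0.49, 0.53, 0.51])  # direct surface measurement
pearson_r = np.corrcoef(app, direct)[0, 1]
icc = icc_2_1(np.column_stack([app, direct]))
```

An ICC above 0.75 is conventionally read as excellent agreement; the study's highest observed value (0.321, naso-facial) falls well below that threshold.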


authors

  • Nachmani, Omri
  • Saun, Tomas
  • Huynh, Minh
  • Forrest, Christopher R
  • McRae, Mark

publication date

  • January 24, 2022