Enhancing emotion recognition in controlled environments with YoLoNetv8
DOI:
https://doi.org/10.62110/sciencein.jist.2025.v13.1010
Keywords:
YOLOv8, Deep Learning, Facial Expression Recognition, human-computer interaction, emotions, Machine Learning
Abstract
Traditional facial expression recognition (FER) approaches for understanding human emotional signals have limitations: their preprocessing, feature extraction, and multi-stage classification steps demand significant processing power and introduce computational complexity. In contrast, an advanced object detection model such as YOLOv8 is favoured for its speed and accuracy. The proposed approach employs the YOLOv8 model to enhance the accuracy of facial emotion recognition in controlled environments. The research uses a dataset of 21,263 images categorized into six basic emotions: Joy, Sorrow, Anger, Disgust, Neutral, and Surprise. To enhance the model's robustness, the images underwent a preprocessing stage that included auto-orientation, magnification, rotation, and resizing. This dataset was used to train the proposed model on a Kaggle T4 GPU, and the model yielded satisfactory accuracy in identifying and categorizing emotions. The model's suitability for real-time emotion detection was assessed using standard metrics, including precision, recall, and mean Average Precision (mAP). The study evaluates varied facial expressions and previously unseen participants to enhance human-computer interaction and mental health evaluation, contributing to the advancement of affective computing.
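As a rough illustration of the train-and-evaluate pipeline the abstract describes, the sketch below fine-tunes a YOLOv8 detector with the Ultralytics Python API and reports the cited metrics. The dataset configuration file (emotions.yaml), the model size, and all hyperparameters are illustrative assumptions, not values taken from the paper.

# Minimal sketch of the described pipeline using the Ultralytics
# YOLOv8 API. The dataset config, model size, and hyperparameters
# are illustrative assumptions, not values reported in the paper.
from ultralytics import YOLO

# Start from a pretrained YOLOv8 detection checkpoint.
model = YOLO("yolov8n.pt")

# Train on a hypothetical dataset config 'emotions.yaml' listing the
# image paths and the six classes (Joy, Sorrow, Anger, Disgust,
# Neutral, Surprise); device=0 selects the first GPU (e.g. a Kaggle T4).
model.train(data="emotions.yaml", epochs=100, imgsz=640, device=0)

# Validate and print the detection metrics the abstract cites.
metrics = model.val()
print(f"precision: {metrics.box.mp:.3f}")   # mean precision over classes
print(f"recall:    {metrics.box.mr:.3f}")   # mean recall over classes
print(f"mAP@0.5:   {metrics.box.map50:.3f}")

Per-class precision and recall are also available on metrics.box, which would allow a finer breakdown of how well each of the six emotions is recognized.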
License
Copyright (c) 2024 Ekta Singh, Parma Nand
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.