Research on Learners' Emotion Recognition Method in Teaching Environment

Minghan Wang

Article ID: 3685
Vol 6, Issue 5, 2023


Abstract


In this study, the authors propose a method that combines CNN and LSTM networks to recognize facial expressions. To handle illumination changes and preserve edge information in the image, the method applies two different preprocessing techniques. The preprocessed images are fed into two independent CNN branches for feature extraction, and the extracted features are fused by an LSTM layer that captures the temporal dynamics of facial expressions. To evaluate the method, the authors use the FER2013 dataset, which contains over 35,000 facial images covering seven expressions; a mixing matrix is generated to ensure a balanced distribution of the expressions across the training and testing sets. The proposed model achieves an accuracy of 73.72% on the FER2013 dataset. The use of focal loss, a variant of cross-entropy loss, further improves performance, especially in handling class imbalance. Overall, the proposed method demonstrates strong generalization ability and robustness to variations in illumination and facial expression, and it can potentially be applied in real-world settings such as emotion recognition for virtual assistants, driver monitoring systems, and mental health diagnosis.
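The abstract does not report layer-level details, so the sketch below is only a minimal PyTorch illustration of the pipeline it describes: two independent CNN branches (one per preprocessed view of the face), an LSTM that fuses the branch features over time, and a focal-loss training objective. The class names (SmallCNN, CNNLSTMExpressionNet), layer sizes, and the 48x48 grayscale input (the usual FER2013 resolution) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """One of the two independent CNN branches (layer sizes are assumed)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 12 * 12, out_dim)  # 48x48 input -> 12x12 after two poolings

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class CNNLSTMExpressionNet(nn.Module):
    """Two CNN branches (one per preprocessed view) fused by an LSTM over frames."""
    def __init__(self, feat_dim=128, hidden=64, num_classes=7):
        super().__init__()
        self.branch_a = SmallCNN(feat_dim)   # e.g. illumination-normalized input
        self.branch_b = SmallCNN(feat_dim)   # e.g. edge-preserving input
        self.lstm = nn.LSTM(2 * feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, view_a, view_b):
        # view_a, view_b: (batch, time, 1, 48, 48) sequences of the two preprocessed images
        b, t = view_a.shape[:2]
        fa = self.branch_a(view_a.reshape(b * t, *view_a.shape[2:])).reshape(b, t, -1)
        fb = self.branch_b(view_b.reshape(b * t, *view_b.shape[2:])).reshape(b, t, -1)
        fused, _ = self.lstm(torch.cat([fa, fb], dim=-1))
        return self.classifier(fused[:, -1])  # classify from the last time step

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: cross-entropy down-weighted by (1 - p_t)^gamma for easy examples."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                      # probability assigned to the true class
    return ((1 - pt) ** gamma * ce).mean()
```

A complete training setup would also need the two preprocessing steps and the mixing-matrix-based train/test split mentioned above, neither of which is specified in the abstract.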


Keywords


Emotion Recognition; Teaching Environment; Facial Expressions







DOI: https://doi.org/10.24294/ijmss.v6i5.3685



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
