Facial Expression Recognition using Convolutional Neural Networks with Transfer Learning Resnet-50

Abstract

Facial expression recognition is important for many applications, including sentiment analysis, human-computer interaction, and interactive systems in areas such as security, healthcare, and entertainment. However, the task is challenging, mainly because of large variations in lighting conditions, viewing angles, and individual eye structures. These factors can drastically alter the appearance of facial expressions, making it difficult for traditional recognition systems to identify emotions consistently and accurately: variations in lighting change the visibility of facial features, while different viewing angles can obscure details critical for expression detection. This study addresses these issues by employing transfer learning with ResNet-50 together with effective pre-processing techniques. The dataset consists of grayscale images with a resolution of 48 × 48 pixels and contains 680 samples categorized into seven classes: anger, contempt, disgust, fear, happy, sadness, and surprise. The dataset was split into 80% for training and 20% for testing to ensure robust model evaluation. The results show that the model using transfer learning achieved exceptional performance, with 99.49% accuracy, 99.49% precision, 99.71% recall, and a 99.60% F1-score, significantly outperforming the model trained without transfer learning. Future research will focus on implementing real-time facial expression recognition systems and exploring other advanced transfer learning models to further improve accuracy and operational efficiency.
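The abstract describes the pipeline only at a high level (48 × 48 grayscale inputs, ResNet-50 transfer learning, seven classes, an 80/20 split); no code accompanies this excerpt. The snippet below is a minimal Keras sketch of that setup, not the authors' implementation: the channel replication, resizing to 224 × 224, frozen backbone, and training hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the published code): ResNet-50 transfer
# learning for 7-class facial expression recognition on 48x48 grayscale images.
import tensorflow as tf
from sklearn.model_selection import train_test_split

NUM_CLASSES = 7  # anger, contempt, disgust, fear, happy, sadness, surprise

def build_model():
    # ResNet-50 pretrained on ImageNet, used here as a frozen feature extractor.
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False

    inputs = tf.keras.Input(shape=(48, 48, 1))
    # Replicate the single grayscale channel to 3 channels, then resize to the
    # ResNet-50 input size (an assumed pre-processing choice).
    x = tf.keras.layers.Concatenate()([inputs, inputs, inputs])
    x = tf.keras.layers.Resizing(224, 224)(x)
    x = tf.keras.applications.resnet50.preprocess_input(x)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# X: (N, 48, 48, 1) pixel arrays; y: (N,) integer class labels.
# 80/20 train/test split as described in the abstract.
# X_train, X_test, y_train, y_test = train_test_split(
#     X, y, test_size=0.2, stratify=y, random_state=42)
# model = build_model()
# model.fit(X_train, y_train, validation_data=(X_test, y_test),
#           epochs=20, batch_size=32)
```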

Author Biographies

Christy Atika Sari, Universitas Dian Nuswantoro

Christy Atika Sari received the Master's degree in Informatics Engineering from Dian Nuswantoro University and Universiti Teknikal Malaysia Melaka (UTeM) in 2012. She is currently active as an author and reviewer for Scopus-indexed international journals and conferences. She received best author and best paper awards at national and international conferences in 2019 and 2020, respectively, and was recognized by the Indonesian Ministry of Education, Culture, Research, and Technology as one of Indonesia's top 50 researchers in 2020. Her research interests include quantum computing for data security, machine learning, deep learning, and image processing. She can be contacted at email: [email protected].

Eko Hari Rachmawanto, Universitas Dian Nuswantoro

Eko Hari Rachmawanto received a bachelor's degree in Informatics Engineering from the University of Dian Nuswantoro in 2010, and a master's double degree from the University of Dian Nuswantoro and Universiti Teknikal Malaysia Melaka (UTeM) between 2010 and 2012. Since 2012, he has been a lecturer in Informatics Engineering at the University of Dian Nuswantoro, Semarang, Indonesia. He currently serves as Editor in Chief of an accredited Indonesian national journal. Since 2022, he has headed the Informatics Engineering study program at the university's off-campus site in Kediri, Indonesia. He is also a member of the Security Data Collaboration Research group, where he mentors researchers working on data security and image processing. He can be contacted at email: [email protected].

Published
2024-08-29
How to Cite
ISTIQOMAH, Annisa Ayu et al. Facial Expression Recognition using Convolutional Neural Networks with Transfer Learning Resnet-50. Journal of Applied Informatics and Computing, [S.l.], v. 8, n. 2, p. 257-264, Aug. 2024. ISSN 2548-6861. doi: https://doi.org/10.30871/jaic.v8i2.8329.
Section
Articles
