A framework for facial expression recognition using deep self-attention network
Abstract
Facial expression recognition (FER) is a widely used technique for emotion recognition. In recent years, numerous deep convolutional neural network (CNN) models have been implemented for this purpose. However, CNNs struggle to focus on the most relevant parts of the input, and existing deep models perform poorly on small datasets and cannot deal with intra-class variation and inter-class similarity. Therefore, in this work, we address these issues by proposing a deep learning framework for FER that combines self-attention with data augmentation. The proposed self-attention model addresses intra-class variation and inter-class similarity, whereas the data augmentation technique improves model performance by enlarging smaller datasets and preventing overfitting. The proposed model handles both posed and spontaneous expressions and has been tested on the JAFFE, CK+, RAF, FER2013, MUG, and YALE datasets. A series of experiments with and without self-attention were conducted to validate our approach. Furthermore, we use the sigmoid activation function in the self-attention mechanism to improve the performance of the proposed deep learning model. Experimental results show that the classification performance of a deep learning model improves when the proposed sigmoid-activated self-attention and the data augmentation technique are incorporated. Comparative analysis using quantitative evaluation metrics (precision, recall, F1-score, and accuracy) shows that the proposed method outperforms existing machine learning and deep learning methods.
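To make the described design concrete, the sketch below shows one plausible way a CNN backbone can be followed by a self-attention block whose attention map uses a sigmoid rather than a softmax, as the abstract indicates. This is a minimal illustration under assumed settings, not the authors' implementation: the layer sizes, the `SigmoidSelfAttention` and `FERNet` names, the 48x48 grayscale input, and the seven-class output are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): CNN features + sigmoid-gated self-attention.
import torch
import torch.nn as nn

class SigmoidSelfAttention(nn.Module):
    """Self-attention over spatial CNN features with sigmoid-activated weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        # Sigmoid instead of softmax: each spatial location is weighted independently.
        attn = torch.sigmoid(q @ k)                     # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return out + x                                  # residual connection

class FERNet(nn.Module):
    """Toy CNN + self-attention classifier for facial expression recognition."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.attention = SigmoidSelfAttention(64)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.attention(self.features(x)))

# Usage: a batch of four 48x48 grayscale face crops (FER2013-style input size).
logits = FERNet()(torch.randn(4, 1, 48, 48))
print(logits.shape)  # torch.Size([4, 7])
```

With a sigmoid gate, attention weights for a given query location are not forced to sum to one, so several facial regions can receive high weight simultaneously; whether this matches the paper's exact formulation would need to be checked against the full text.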