Deep convolutional neural networks have successfully recognized facial emotions in computer vision for years. However, existing detection approaches are not always reliable or explainable. We propose our model GiMeFive with interpretations, i.e., via layer activations and gradient-weighted class activation mapping. We compare against state-of-the-art methods on the classification of six facial emotions. Empirical results show that our model outperforms the previous methods in terms of accuracy on two Facial Emotion Recognition (FER) benchmarks and our aggregated FER GiMeFive. Furthermore, we demonstrate our work on real-world image and video examples, as well as real-time live camera streams.
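As a rough illustration of the gradient-weighted class activation mapping (Grad-CAM) idea used for interpretation, here is a minimal NumPy sketch. The function name, array shapes, and random inputs are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap from one convolutional layer.

    activations: feature maps of shape (K, H, W)
    gradients:   d(class score)/d(activations), same shape
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted sum of the feature maps, followed by ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization (guard against all-zero maps).
    if cam.max() > 0:
        cam /= cam.max()
    return cam  # shape (H, W)

# Tiny example with random tensors standing in for a real conv layer.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))
G = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(A, G)
print(heatmap.shape)  # (7, 7)
```

The resulting heatmap is typically upsampled to the input resolution and overlaid on the face image to show which regions drove the predicted emotion.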
A detailed list of requirements can be found in requirements.txt. Install the dependencies with one of the following commands:
pip install -r requirements.txt
python3 -m pip install -r requirements.txt
Please change the filepath to the image folder that you would like to try.
- Option 1️⃣: Run the following command:
python3 script_csv.py 'data/valid'
- Option 2️⃣: Use the shell script:
./script_csv.sh
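For reference, the core of such a CSV-generating script typically looks like the sketch below. The helper name, folder layout, and CSV columns are assumptions for illustration; the repository's script_csv.py is the authoritative version:

```python
import csv
from pathlib import Path

def write_predictions_csv(image_dir: str, out_csv: str, predict) -> int:
    """Write one CSV row (filepath, label) per image found under image_dir.

    `predict` is any callable mapping an image path to a class index.
    Returns the number of rows written (excluding the header).
    """
    paths = sorted(Path(image_dir).rglob("*.jpg")) + sorted(Path(image_dir).rglob("*.png"))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filepath", "label"])  # header row
        for p in paths:
            writer.writerow([str(p), predict(p)])
    return len(paths)
```

Passing a folder such as 'data/valid' then yields one labeled row per image, which can be compared against ground truth for evaluation.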
How to run the script to generate an XAI video from a given video path or a camera stream?
Run one of the following commands:
python3 script_video.py -h
python3 script_video.py -s camera
python3 script_video.py -s video -i 'video/video.mp4'
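The -s/-i flags above can be parsed with argparse roughly as follows. This is a hedged sketch of how the command-line interface behaves, not the actual source of script_video.py:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Generate an XAI emotion video.")
    # -s selects the input source: a live camera or a video file.
    parser.add_argument("-s", "--source", choices=["camera", "video"], required=True)
    # -i supplies the input path, only needed when the source is a video file.
    parser.add_argument("-i", "--input", default=None, help="path to the input video")
    return parser

args = build_parser().parse_args(["-s", "video", "-i", "video/video.mp4"])
print(args.source, args.input)  # video video/video.mp4
```

Running with `-h`, as in the first command above, prints this generated usage text.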
To evaluate on a live camera stream, run the following command:
python3 eval_livecam.py
Exit via Control + C.
@article{wang2024gimefive,
title={GiMeFive: Towards Interpretable Facial Emotion Classification},
author={Wang, Jiawen and Kawka, Leah},
journal={arXiv preprint arXiv:2402.15662},
year={2024}
}
The PDF version is here. The shared link on Google Docs is available here.
😄 happiness (0)
😲 surprise (1)
😢 sadness (2)
😡 anger (3)
🤢 disgust (4)
😨 fear (5)
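The class indices above can be decoded with a small lookup table when reading model outputs; the dict and function names below are illustrative:

```python
# Index-to-emotion mapping for the six-class FER setup above.
EMOTIONS = {
    0: "happiness",
    1: "surprise",
    2: "sadness",
    3: "anger",
    4: "disgust",
    5: "fear",
}

def decode(label_idx: int) -> str:
    """Translate a predicted class index into its emotion name."""
    return EMOTIONS[label_idx]

print(decode(3))  # anger
```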
pipreqs /Users/wery/Desktop/SEP-CVDL
pip3 freeze > requirements.txt
git status
Review which files are to be modified, then:
git add .
git pull
git config pull.rebase false
git pull
git commit
git push
git add .
git commit -m " <bababa>
Co-authored-by: leahkawka <[email protected]>
Co-authored-by: werywjw <[email protected]>"
git push origin main
colab-convert livecam.ipynb livecam.py -nc -rm -o
Go to the ezgif website.
git submodule add https://github.com/werywjw/data.git
source /Users/wery/venv/bin/activate