GiMeFiveβœ‹: Towards Interpretable Facial Emotion Classification (PyTorch Implementation)

Abstract

Deep convolutional neural networks have been shown to successfully recognize facial emotions over the past years in the realm of computer vision. However, the existing detection approaches are not always reliable or explainable. We therefore propose our model GiMeFive with interpretations, i.e., via layer activations and gradient-weighted class activation mapping (Grad-CAM). We compare against the state-of-the-art methods to classify the six facial emotions. Empirical results show that our model outperforms the previous methods in terms of accuracy on two Facial Emotion Recognition (FER) benchmarks and our aggregated FER GiMeFive dataset. Furthermore, we demonstrate our work on real-world image and video examples, as well as real-time live streams.
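For readers unfamiliar with Grad-CAM, here is a minimal PyTorch sketch of the idea. It uses a stand-in ResNet-18 rather than the actual GiMeFive architecture; the target layer and all names are assumptions for illustration only.

```python
# Minimal Grad-CAM sketch (stand-in model, NOT the GiMeFive architecture).
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in CNN
model.eval()

store = {}

def fwd_hook(module, inputs, output):
    store["act"] = output.detach()          # feature maps of the target layer

def bwd_hook(module, grad_input, grad_output):
    store["grad"] = grad_output[0].detach()  # gradients w.r.t. those maps

# Hook the last convolutional block of the stand-in model.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)           # dummy input image
scores = model(x)
scores[0, scores[0].argmax()].backward()  # gradient of the top-scoring class

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, and keep only positive evidence.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * store["act"]).sum(dim=1))        # (1, H, W)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```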

Overview

Results

Requirements

A detailed list of requirements can be found in requirements.txt. Install them with either of the following commands:

pip install -r requirements.txt
python3 -m pip install -r requirements.txt

How to run the script to get the CSV file of classification scores?

Please change the file path to the image folder that you would like to try. πŸ˜„ A hypothetical sketch of such a scoring loop follows the options below.

  • Option 1️⃣: Run the following command:
python3 script_csv.py 'data/valid'
  • Option 2️⃣: Run the shell script:
./script_csv.sh
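For orientation, the scoring loop could plausibly look like the sketch below. The real logic lives in script_csv.py; the checkpoint name, the 64Γ—64 input size, and the CSV layout here are assumptions.

```python
# Hypothetical scoring loop; checkpoint name and input size are assumptions.
import csv
import sys
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

LABELS = ["happiness", "surprise", "sadness", "anger", "disgust", "fear"]

preprocess = transforms.Compose([
    transforms.Resize((64, 64)),  # assumed input size
    transforms.ToTensor(),
])

model = torch.load("gimefive.pth", map_location="cpu")  # assumed checkpoint
model.eval()

with open("scores.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filepath"] + LABELS)
    for path in sorted(Path(sys.argv[1]).glob("**/*.jpg")):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)[0]
        writer.writerow([str(path)] + [f"{p:.4f}" for p in probs.tolist()])
```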

How to run the script to generate an XAI video from a given video path or a camera stream? πŸ₯Ή πŸ“Ή

To see all options, run:

python3 script_video.py -h

Then run one of the following commands:

python3 script_video.py -s camera
python3 script_video.py -s video -i 'video/video.mp4'
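For reference, the source selection behind the -s flag could look like the following sketch. This is an assumption for illustration, not the actual script_video.py code.

```python
# Hypothetical sketch of the camera/video source selection with OpenCV.
import argparse
import cv2

parser = argparse.ArgumentParser()
parser.add_argument("-s", "--source", choices=["camera", "video"], required=True)
parser.add_argument("-i", "--input", help="video file path when --source video")
args = parser.parse_args()

# Camera index 0 is the default webcam; otherwise open the given file.
cap = cv2.VideoCapture(0 if args.source == "camera" else args.input)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run the emotion classifier and overlay the XAI heatmap here ...
cap.release()
```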

What do our videos look like?

How to test the live demo with camera (old version)?

Run the following command:

python3 eval_livecam.py

Exit via Ctrl + C.

Please kindly cite our paper if you find this repository helpful 😸

@article{wang2024gimefive,
  title={GiMeFive: Towards Interpretable Facial Emotion Classification},
  author={Wang, Jiawen and Kawka, Leah},
  journal={arXiv preprint arXiv:2402.15662},
  year={2024}
}

Q&A

Where is our paper (final report) πŸ“„?

The PDF Version is here.

Where are our presentation slides πŸŽ₯?

On Google Docs. The shared link is available here. The PDF Version is here.

Where is our proposal (preliminary report) πŸ“ƒ?

The PDF Version is here.

What are the emotion labels?

πŸ˜„ happiness (0)

😲 surprise (1)

😭 sadness (2)

😑 anger (3)

🀒 disgust (4)

😨 fear (5)
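For programmatic use, the same mapping can be written as a plain Python dict (a convenience snippet; the name EMOTION_LABELS is ours, not from the repository):

```python
# Index-to-label mapping for the six emotions (name is illustrative).
EMOTION_LABELS = {
    0: "happiness",
    1: "surprise",
    2: "sadness",
    3: "anger",
    4: "disgust",
    5: "fear",
}
```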

How to generate a requirements.txt?

Either scan the project imports with pipreqs:

pipreqs /Users/wery/Desktop/SEP-CVDL

or dump the entire current environment with pip:

pip3 freeze > requirements.txt

How to resolve the conflicts?

git status

See which files need to be resolved, then:

git add .
git pull                      # may fail if the merge strategy is unset
git config pull.rebase false  # use merge as the reconciliation strategy
git pull
git commit                    # commit the merge result
git push

How to co-author commits?

git add .
git commit -m "<commit message>

Co-authored-by: leahkawka <[email protected]>
Co-authored-by: werywjw <[email protected]>"
git push origin main

How to convert .ipynb to .py?

colab-convert livecam.ipynb livecam.py -nc -rm -o

How to convert .mp4 to .gif?

Go to the ezgif webpage.

How to create submodules?

git submodule add https://github.com/werywjw/data.git

How to create and activate the virtual environment?

python3 -m venv /Users/wery/venv
source /Users/wery/venv/bin/activate