This project uses the FER-2013 dataset to recognize facial emotions. It consists of two main components:
- Real-Time Recognition: Detects and recognizes facial emotions from a live camera feed.
- Input-Based Recognition: Allows users to select an image or video file from their system for emotion recognition.
The FER-2013 dataset is a publicly available dataset containing grayscale images of faces labeled with one of seven emotions: angry, disgust, fear, happy, sad, surprise, or neutral. You can find more information and download the dataset here.
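The seven emotion classes above are encoded as integer labels in FER-2013. As a minimal sketch (the index order shown is the common FER-2013 convention; verify it against the copy of the dataset you download), a lookup like the following maps a model's predicted class index back to an emotion name:

```python
# Common FER-2013 label order: verify against your dataset copy.
FER_LABELS = {
    0: "angry",
    1: "disgust",
    2: "fear",
    3: "happy",
    4: "sad",
    5: "surprise",
    6: "neutral",
}

def label_name(class_index: int) -> str:
    """Map a predicted class index to its emotion name."""
    return FER_LABELS[class_index]
```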
The model used in this project achieves an accuracy of approximately 65%. You can download the model from here.
- Real-Time: Navigate to the `realtime` directory and run the real-time recognition script.
- The script will start a real-time video feed, detecting and recognizing emotions in faces.
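Models trained on FER-2013 expect 48x48 grayscale inputs, so each face crop from the video feed has to be converted before prediction. The sketch below illustrates that preprocessing step only; the function name is illustrative, not the script's actual API, and it uses plain Python (nearest-neighbor resizing) to stay dependency-free:

```python
def preprocess_face(frame, size=48):
    """Convert an RGB face crop to a size x size grayscale grid.

    frame: nested list of pixel rows, each pixel an (r, g, b) tuple.
    Returns grayscale values scaled to [0, 1], the 48x48 input
    shape that FER-2013-trained models typically expect.
    """
    h, w = len(frame), len(frame[0])
    out = []
    for i in range(size):
        src_row = frame[i * h // size]  # nearest-neighbor row pick
        row = []
        for j in range(size):
            r, g, b = src_row[j * w // size]
            # Luminance-weighted grayscale, scaled to [0, 1].
            row.append((0.299 * r + 0.587 * g + 0.114 * b) / 255.0)
        out.append(row)
    return out
```

In the real script the crop would come from a face detector running on each webcam frame, and the resulting grid would be fed to the emotion classifier.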
- Input-Based: Navigate to the `input_based` directory and run the input-based recognition script.
- Follow the prompts to select an image or video file from your system for emotion recognition.
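Since the input-based mode accepts either an image or a video file, the script has to decide how to process the selection. A minimal sketch of that branching, assuming a simple extension check (the helper name and the extension lists are illustrative, not taken from the actual script):

```python
from pathlib import Path

# Illustrative extension sets; the actual script may support others.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}
VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def classify_input(path: str) -> str:
    """Decide whether a selected file should be processed as a
    single image or frame-by-frame as a video."""
    ext = Path(path).suffix.lower()
    if ext in IMAGE_EXTS:
        return "image"
    if ext in VIDEO_EXTS:
        return "video"
    raise ValueError(f"Unsupported file type: {ext}")
```

An image would then be run through the detector once, while a video would be read frame by frame.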
Please note that the project repository contains personal test images and videos. These are intended for demonstration purposes only and should not be used for other purposes without permission.
For more details about the project, including methodology, results, and analysis, please refer to the Project Report.
This project is licensed under the CC0 1.0 Universal (CC0 1.0) License. See the LICENSE.txt file for details.
Contributions to this project are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.