This repo contains a real-time facial emotion detection iOS application developed as a personal project. The app utilizes CoreML with SwiftUI to provide an intuitive and interactive user experience.
This project builds a convolutional neural network (CNN) model that classifies facial expressions from images of faces. A Jupyter notebook is provided to train the facial expression recognition model using transfer learning with VGGFace on the FER2013 dataset.
The dataset contains 35,887 grayscale 48x48 pixel face images with 7 emotions (angry, disgust, fear, happy, sad, surprise, neutral). The data is split into train, validation, and test sets.
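For reference, the raw data is usually parsed from the single `fer2013.csv` file; here is a minimal sketch assuming the standard Kaggle layout (`emotion`, `pixels`, and `Usage` columns), not the notebook's exact code:

```python
import numpy as np
import pandas as pd

# Assumes the standard fer2013.csv layout: one row per image with an integer
# emotion label (0-6), a space-separated string of 48*48 pixel values, and a
# Usage column carrying the official train/validation/test split.
df = pd.read_csv("fer2013.csv")

def to_arrays(frame):
    pixels = np.stack([np.asarray(s.split(), dtype=np.float32) for s in frame["pixels"]])
    return pixels.reshape(-1, 48, 48, 1), frame["emotion"].to_numpy()

x_train, y_train = to_arrays(df[df["Usage"] == "Training"])
x_val, y_val = to_arrays(df[df["Usage"] == "PublicTest"])
x_test, y_test = to_arrays(df[df["Usage"] == "PrivateTest"])
```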
A pre-trained VGGFace model was used for transfer learning. VGGFace is a convolutional neural network trained on a large facial image dataset for face recognition.
The base model was frozen, and new dense classifier layers were added on top and trained to classify facial expressions into the 7 classes.
The model was trained for 30 epochs using the Adam optimizer and categorical cross-entropy loss. Test accuracy reached ~70%.
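In code, that setup looks roughly like the sketch below. It assumes the `keras_vggface` package as the source of the pre-trained backbone, illustrative head sizes, and inputs already resized to 224x224 RGB; the notebook's actual code may differ:

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras_vggface.vggface import VGGFace  # assumed source of the pre-trained backbone

NUM_CLASSES = 7

# Pre-trained VGGFace convolutional base with its classification head removed.
# The 48x48 grayscale FER2013 images are assumed to have been resized to
# 224x224 and replicated to 3 channels before reaching the network.
base = VGGFace(include_top=False, input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the base so only the new head is trained

# New dense classifier head for the 7 emotion classes; the hidden-layer size
# and dropout rate are illustrative, not taken from the notebook.
model = Sequential([
    base,
    Dense(256, activation="relu"),
    Dropout(0.5),
    Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Labels are assumed to be one-hot encoded to match the categorical loss.
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30)
```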
I also built a non-VGGFace baseline model, which achieved a lower accuracy of ~55%.
The model achieves ~70% accuracy on the test set. The confusion matrix shows that it performs very well on the "happy" class, with high precision and recall, while "fear" and "sad" are commonly misclassified as one another. Precision and recall for "angry", "disgust", "surprise", and "neutral" are decent, though some of these classes are still confused with each other.
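The per-class analysis above can be reproduced with scikit-learn once the trained model has scored the test set; a minimal sketch, reusing the `model`, `x_test`, and `y_test` names from the earlier snippets:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Convert softmax outputs and one-hot labels back to class indices.
y_pred = np.argmax(model.predict(x_test), axis=1)
y_true = np.argmax(y_test, axis=1)

# Rows are true classes, columns are predicted classes; off-diagonal cells
# such as (fear, sad) reveal which emotions get confused with each other.
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=EMOTIONS))
```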
Overall the model achieves reasonable but not excellent performance on this dataset. Some ways to potentially improve accuracy include tuning hyperparameters, training on more varied data, and exploring different model architectures.
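As one concrete example of the "more varied data" idea, Keras image augmentation can feed randomly perturbed copies of the training faces to the model; the parameter values below are illustrative, not taken from the notebook:

```python
from keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation ranges; small rotations, shifts, zooms, and
# horizontal flips are plausible for roughly centered face crops.
augmenter = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)

# Train the model from the transfer-learning sketch on augmented batches.
model.fit(
    augmenter.flow(x_train, y_train, batch_size=64),
    validation_data=(x_val, y_val),
    epochs=30,
)
```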
To run the iOS application, follow these steps:
- Clone the repository to your local machine.

  ```bash
  $ git clone https://github.com/andr3wV/CoreMLEmotionDetect.git
  ```

- Open the project in Xcode.
- Ensure you have the necessary dependencies installed. If any are missing, use CocoaPods or Swift Package Manager to install them.
- Build and run the app on a connected iOS device or simulator.
If you want to train the model yourself (not necessary for the app), follow these steps:
- Install the notebook dependencies.

  ```bash
  $ pip install -r requirements.txt
  ```

- Download the training data (the FER2013 dataset).
- Open the Jupyter notebook `facial_expression_recognition.ipynb`.
- Run the notebook.
To use the app:
- Launch the application on your iOS device.
- Grant the necessary permissions for camera access.
- Point the front camera at a human face.
Contributions to this project are welcome! If you find any bugs or have suggestions for improvements, please feel free to open an issue or submit a pull request. Also, if someone wants to work on the UI, that would be greatly appreciated!