EEG & Computer Vision based Facial Expression Recognition
-
Make sure that you are using Python version 3.6. This project may work with other versions of Python, but it was tested on version 3.6.8 (64-bit).
-
Install all Python dependencies using:
[path to python executable] -m pip install -r requirements.txt
Model Runner Usage:
[path to python executable] modelRunner.py [-n, --modelName] defaultcv [-l, --labelFrequency] 2 [-f, --dataFrequency] 2 [-d, --dataPath] video_or_eegfilename
Example commands:
python modelRunner.py -n defaulteeg -l 2 -f 128 -d s01_trial01.dat
python modelRunner.py -n defaultcv -l 2 -d s01_trial01.mp4
Note that the CV models do not use the dataFrequency argument.
Output generated by the models will be placed in the ./output directory
This application currently supports a default EEG, random EEG, default CV, and random CV model. The default EEG model was trained on the DEAP dataset, which features predominantly white participants recorded in Europe. Use it with caution, as the model may be biased toward the demographics of the training participants.
Custom models should inherit from the AbstractModel class defined in models/interface.py. A custom model should initialize its weights in the constructor and produce its JSON outputs in the run function.
CV models can be called with an optional sampleRate argument (default 1)
EEG models can be called with an optional sampleRate argument (default 1) and a dataFrequency argument (default 128)
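A minimal sketch of a custom model is shown below. It assumes the interface described above (weights initialized in the constructor, JSON produced by the run function); the stand-in AbstractModel, the run arguments, and the output fields are illustrative assumptions, not the project's actual signatures, so check models/interface.py for the real contract.

```python
import json


class AbstractModel:
    """Stand-in for the class defined in models/interface.py."""

    def run(self, data_path, sample_rate=1):
        raise NotImplementedError


class MyCustomModel(AbstractModel):
    def __init__(self):
        # Initialize the model's weights in the constructor
        # (dummy values here; a real model would load trained weights).
        self.weights = [0.5, 0.5]

    def run(self, data_path, sample_rate=1):
        # Produce the JSON output in run(); a real model would load
        # data_path and run inference instead of returning a fixed result.
        predictions = [{"frame": 0, "label": "neutral", "score": self.weights[0]}]
        return json.dumps(predictions)


model = MyCustomModel()
print(model.run("s01_trial01.mp4"))
```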
If using a custom EEG model, make sure its input still follows the file format listed below. The EEG power graph also assumes the input format specified below, so update it as well if you change the input format in any way.
The input for the CV model must be a .mp4 file and can be of any framerate
The input for the EEG model must be a .dat file with these specifications:
- The array extracted from the .dat file should be in the format [channels, signals] with the shape of the array being [# channels, total time of recording * dataFrequency]
- There must be at least 8 signals per channel (total time of recording * dataFrequency >= 8)
- The default dataFrequency is 128 but can be specified when calling modelRunner.py
- The data the model was trained on was downsampled to 128 Hz and bandpass filtered from 4-45 Hz
- Artifacts should be removed ahead of time for optimal results
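The shape constraints above can be checked before handing data to the model. This is a hedged sketch, not project code: the helper name and the 32-channel, 60-second example array are illustrative assumptions.

```python
import numpy as np

DATA_FREQUENCY = 128  # default; overridable via -f/--dataFrequency


def validate_eeg_array(arr):
    """Check the [channels, signals] shape constraints listed above."""
    if arr.ndim != 2:
        raise ValueError("expected shape [# channels, total time * dataFrequency]")
    channels, signals = arr.shape
    if signals < 8:
        raise ValueError("need at least 8 signals per channel")
    return channels, signals


# Example: 32 channels, 60 s of recording at 128 Hz
eeg = np.zeros((32, 60 * DATA_FREQUENCY))
print(validate_eeg_array(eeg))  # (32, 7680)
```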
Finally, if using the combined model, make sure the CV and EEG data are the same length and synced up
-
Navigate to the gui folder
-
Run python server.py under your Python 3.6.8 virtual environment
-
Go to localhost:5000 in your browser
-
From the GUI you can run the models individually or together
-
You can also specify a specific model and model frequency