This project uses the TensorFlow framework to implement the MobileNetV1 model to classify flower images. Give this repo a star if you find it useful.
The MobileNet network architecture is shown below; the image is excerpted from the original paper.
We see that the model has 30 layers with the following characteristics:
- Layer 1: Convolution layer with stride 2
- Layer 2: Depthwise layer
- Layer 3: Pointwise layer
- Layer 4: Depthwise layer with stride 2 (unlike layer 2, whose depthwise convolution has stride 1)
- Layer 5: Pointwise layer
- ...
- Layer 30: Softmax, used for classification
The model's key improvement is a convolution method called Depthwise Separable Convolution, which reduces both the model size and the computational complexity. A depthwise separable convolution is a depthwise convolution followed by a pointwise convolution, as follows:
A small note about the architecture: after each convolution, MobileNet applies Batch Normalization (BN) and ReLU, as shown below:
Standard Convolution on the left, Depthwise separable convolution with BN and ReLU on the right
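To make this concrete, below is a minimal Keras sketch of one depthwise separable block with the BN + ReLU pattern shown above; the function name and filter count are illustrative, not the repo's actual code. Per the paper, replacing a standard convolution with this depthwise/pointwise pair cuts computation by a factor of roughly 1/N + 1/D_K^2, where N is the number of output channels and D_K is the kernel size.

```python
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_block(x, pointwise_filters, stride=1):
    """One MobileNet block: depthwise conv -> BN -> ReLU -> 1x1 conv -> BN -> ReLU."""
    # Depthwise convolution: a single 3x3 filter per input channel
    x = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Pointwise convolution: a 1x1 convolution that mixes channels
    x = layers.Conv2D(pointwise_filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    return x

# Example: apply one block to a 224x224 RGB input
inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = depthwise_separable_block(inputs, pointwise_filters=64)
```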
- Github: Nguyendat-bit
- Email: nduc0231@gmail
- Facebook: Nguyễn Đạt
- Linkedin: Đạt Nguyễn Tiến
- Step 1: Make sure you have installed Miniconda. If not yet, see the setup document here
- Step 2: `cd` into `MobilenetV1` and run `conda env create -f environment.yml`
- Step 3: Activate the conda environment using `conda activate MobilenetV1`
- Download the data:
  - Download the dataset here
  - Extract the file and put the `train` and `validation` folders into `./data` (split with `splitfolders`; see the sketch after this list)
  - The `train` folder is used for the training process
  - The `validation` folder is used for validating the training result after each epoch
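If you need to create the split yourself, here is a minimal sketch using the `split-folders` package; the `./flower_photos` input path is an assumption, and note that the package names its output folders `train` and `val` by default:

```python
# pip install split-folders
import splitfolders

# Split the per-class image folders into train/val at an 80/20 ratio.
# "./flower_photos" is an assumed path to the extracted dataset.
splitfolders.ratio("./flower_photos", output="./data", seed=1337, ratio=(0.8, 0.2))
```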
This library uses the ImageDataGenerator API from TensorFlow 2.0 to load images. Make sure you have some understanding of how it works via its documentation.
Structure of these folders in `./data`:

```
train/
...daisy/
......daisy0.jpg
......daisy1.jpg
...dandelion/
......dandelion0.jpg
......dandelion1.jpg
...roses/
......roses0.jpg
......roses1.jpg
...sunflowers/
......sunflowers0.jpg
......sunflowers1.jpg
...tulips/
......tulips0.jpg
......tulips1.jpg
validation/
...daisy/
......daisy2000.jpg
......daisy2001.jpg
...dandelion/
......dandelion2000.jpg
......dandelion2001.jpg
...roses/
......roses2000.jpg
......roses2001.jpg
...sunflowers/
......sunflowers2000.jpg
......sunflowers2001.jpg
...tulips/
......tulips2000.jpg
......tulips2001.jpg
```
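As a quick sanity check of this layout, here is a minimal sketch of loading it with ImageDataGenerator; the rescaling, image size, and batch size are illustrative defaults, not necessarily what `train.py` uses:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1]; 224x224 is the standard MobileNet input size.
datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = datagen.flow_from_directory(
    "./data/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",  # one sub-folder per flower class
)
```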
Review training on Colab:
Training script:

```bash
python train.py --train-folder ${link_to_train_folder} --valid-folder ${link_to_valid_folder} --classes ${num_classes} --epochs ${epochs}
```

Example:

```bash
python train.py --train-folder ./data/train --valid-folder ./data/val --classes 5 --epochs 100
```
There are some important arguments for the script you should consider when running it:

- `train-folder`: The folder of training data
- `valid-folder`: The folder of validation data
- `Mobilenetv1-folder`: Where the model is saved after training
- `classes`: The number of classes in your problem
- `batch-size`: The batch size of the dataset
- `lr`: The learning rate
- `droppout`: The dropout rate
- `label-smoothing`: The label-smoothing factor
- `image-size`: The image size of the dataset
- `alpha`: Width Multiplier, mentioned in the paper on page 4
- `rho`: Resolution Multiplier, mentioned in the paper on page 4
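For reference, the paper defines these two multipliers so that the cost of one depthwise separable layer becomes (with kernel size $D_K$, feature map size $D_F$, and $M$/$N$ input/output channels):

$$D_K \cdot D_K \cdot \alpha M \cdot \rho D_F \cdot \rho D_F + \alpha M \cdot \alpha N \cdot \rho D_F \cdot \rho D_F$$

So $\alpha \in (0, 1]$ thins every layer's channels and $\rho \in (0, 1]$ shrinks the input resolution, each reducing computation roughly quadratically.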
If you want to test the model on a single image, run:

```bash
python predict.py --test-file ${link_to_test_image}
```
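For example, using one of the validation images from the layout above:

```bash
python predict.py --test-file ./data/validation/daisy/daisy2000.jpg
```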
Training log from my implementation (final epochs):
```
Epoch 00097: val_acc improved from 0.87534 to 0.87805, saving model to MobilenetV1
Epoch 98/100
207/207 [==============================] - 46s 220ms/step - loss: 0.2158 - acc: 0.9421 - val_loss: 0.4410 - val_acc: 0.8862
Epoch 00098: val_acc improved from 0.87805 to 0.88618, saving model to MobilenetV1
Epoch 99/100
207/207 [==============================] - 45s 217ms/step - loss: 0.1981 - acc: 0.9488 - val_loss: 0.4763 - val_acc: 0.8753
Epoch 00099: val_acc did not improve from 0.88618
Epoch 100/100
207/207 [==============================] - 45s 218ms/step - loss: 0.2038 - acc: 0.9470 - val_loss: 0.4322 - val_acc: 0.8726
```
If you run into any issues while using this library, please let us know via the Issues tab.