SOLOv2_Road-Markings

1. Convert VGG Image Annotator (VIA) annotations to the COCO format

Classes: straight arrow, left arrow, right arrow, straight left arrow, straight right arrow, pedestrian crossing, special lane
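A minimal conversion sketch is shown below. It is not taken from this repository; it assumes a VIA 2.x region-data JSON export with polygon regions and a region attribute named "class" holding one of the seven labels, and the file paths are placeholders.

import json
import os

from PIL import Image

CLASSES = ['straight arrow', 'left arrow', 'right arrow',
           'straight left arrow', 'straight right arrow',
           'pedestrian crossing', 'special lane']


def via_to_coco(via_json, image_dir, out_json):
    # Load the VIA region-data export (a dict keyed by filename+size).
    with open(via_json) as f:
        via = json.load(f)
    coco = {
        'images': [],
        'annotations': [],
        'categories': [{'id': i + 1, 'name': n} for i, n in enumerate(CLASSES)],
    }
    ann_id = 1
    for img_id, item in enumerate(via.values(), start=1):
        file_name = item['filename']
        width, height = Image.open(os.path.join(image_dir, file_name)).size
        coco['images'].append({'id': img_id, 'file_name': file_name,
                               'width': width, 'height': height})
        for region in item['regions']:
            shape = region['shape_attributes']            # polygon vertices
            label = region['region_attributes']['class']  # assumed attribute name
            xs, ys = shape['all_points_x'], shape['all_points_y']
            poly = [coord for point in zip(xs, ys) for coord in point]
            x, y = min(xs), min(ys)
            w, h = max(xs) - x, max(ys) - y
            coco['annotations'].append({
                'id': ann_id, 'image_id': img_id,
                'category_id': CLASSES.index(label) + 1,
                'segmentation': [poly],
                'bbox': [x, y, w, h],
                'area': w * h,  # bounding-box area used as an approximation
                'iscrowd': 0})
            ann_id += 1
    with open(out_json, 'w') as f:
        json.dump(coco, f)


# Placeholder paths -- replace with the actual annotation file and image folder:
# via_to_coco('via_region_data.json', 'images/', 'train_coco.json')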

2. SOLOv2 environment

  • Operating system: Ubuntu 20.04.4
  • GPU: NVIDIA GeForce RTX 3090
  • CUDA 11.1
  • PyTorch 1.8.0
  • torchvision 0.9.0
  • Python 3.7.13
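One way to reproduce this environment, assuming conda is available and using the official PyTorch wheels for CUDA 11.1 (the environment name is a placeholder; the SOLO codebase and its remaining requirements are installed per its own instructions):

  conda create -n solov2 python=3.7 -y
  conda activate solov2
  pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html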

3. Create my_dataset.py

Create a Python file defining the classes of the custom dataset in mmdet/datasets:

from .coco import CocoDataset
from .registry import DATASETS

@DATASETS.register_module
class MyDataset(CocoDataset):
    # Road-marking classes of the custom dataset
    CLASSES = ['straight arrow', 'left arrow', 'right arrow',
               'straight left arrow', 'straight right arrow',
               'pedestrian crossing', 'special lane']

Register the dataset in mmdet/datasets/__init__.py, for example:
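A minimal sketch, assuming the mmdet 1.x-style __init__.py used by this SOLO-based codebase (existing imports and __all__ entries are abbreviated):

# mmdet/datasets/__init__.py (only the added lines are shown)
from .my_dataset import MyDataset

__all__ = [
    # ... existing dataset names ...
    'MyDataset',
]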

4. Modify solov2_r101_fpn_8gpu_3x.py

Backbone: ResNet-101 with FPN. Adapt the dataset settings and the number of classes in the config to the custom dataset, as sketched below.
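A sketch of the fields that typically change for a custom dataset; the head type name and the background-inclusive class count follow the mmdet 1.x convention used by this SOLO-based codebase, and the data paths are placeholders. All other settings keep their original values.

model = dict(
    # backbone (ResNet-101), neck (FPN) and the remaining settings are unchanged
    bbox_head=dict(
        type='SOLOv2Head',
        num_classes=8,  # 7 road-marking classes + background
    ))

dataset_type = 'MyDataset'           # class registered in step 3
data_root = 'data/road_markings/'    # placeholder path
data = dict(
    train=dict(type=dataset_type,
               ann_file=data_root + 'annotations/train.json',
               img_prefix=data_root + 'images/train/'),
    val=dict(type=dataset_type,
             ann_file=data_root + 'annotations/val.json',
             img_prefix=data_root + 'images/val/'),
    test=dict(type=dataset_type,
              ann_file=data_root + 'annotations/val.json',
              img_prefix=data_root + 'images/val/'))

Since the config name assumes 8 GPUs, the learning rate is usually scaled down (linear scaling rule) when training on a single RTX 3090.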

5. Training

  python tools/train.py configs/solov2/solov2_r101_fpn_8gpu_3x.py

6. Evaluation

  python tools/test_ins.py configs/solov2/solov2_r101_fpn_8gpu_3x.py weights/homo_model_2/epoch_100.pth --show --out results_solo.pkl --eval segm

7. Visualization

The class_names should be modified to match the custom dataset; see the sketch after the command below.

  python tools/test_ins_vis.py configs/solov2/solov2_r101_fpn_8gpu_3x.py weights/homo_model_2/latest.pth --show --save_dir work_dirs/val_homo_2data
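A sketch of the change, assuming the visualization script keeps the label names in a class_names list (the variable name and its exact location in tools/test_ins_vis.py are assumptions):

# Replace the default (COCO) label names used when drawing the results
class_names = ['straight arrow', 'left arrow', 'right arrow',
               'straight left arrow', 'straight right arrow',
               'pedestrian crossing', 'special lane']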

8. Result

Results of the model trained on bird's-eye-view images.

[Figure: front-view image]

[Figure: bird's-eye view]
