
ZAP-FYP/YOLOPv2-1D_Coordinates


Video splitting:

ffmpeg -i data/zafra-videos/IMG_0263.MOV -c copy -map 0 -segment_time 30 -f segment data/videos-long/chunks/IMG_0263.MOV/output_video_%03d.mp4

Start segmentation:

python demo.py --source data/example.jpg --device cpu

BDD kaggle.json credentials:

{"username":"uom190055f","key":"11a39bb923bb951f08f52f78167605ab"}
{"username":"uom190055f","key":"f356d3e522ef48d6f7b6704a8bb747ae"}

Model weights: https://drive.google.com/file/d/1ggqh1Wc1T9zY4zN9BY4p-mTyq_dDtdEv/view?usp=sharing
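The two steps above can also be chained from Python for batch runs. The sketch below is only a convenience wrapper around the same commands, assuming ffmpeg is on the PATH and that demo.py also accepts a video file via --source (only the image case is shown in this README); the paths mirror the example commands above.

```python
import subprocess
from pathlib import Path

# Paths taken from the commands above; adjust for other recordings.
SOURCE = Path("data/zafra-videos/IMG_0263.MOV")
CHUNK_DIR = Path("data/videos-long/chunks/IMG_0263.MOV")
CHUNK_DIR.mkdir(parents=True, exist_ok=True)

# 1. Split the long recording into ~30 s segments without re-encoding (stream copy).
subprocess.run(
    [
        "ffmpeg", "-i", str(SOURCE),
        "-c", "copy", "-map", "0",
        "-segment_time", "30", "-f", "segment",
        str(CHUNK_DIR / "output_video_%03d.mp4"),
    ],
    check=True,
)

# 2. Run the demo on every chunk; assumes demo.py handles video input via --source.
for chunk in sorted(CHUNK_DIR.glob("output_video_*.mp4")):
    subprocess.run(
        ["python", "demo.py", "--source", str(chunk), "--device", "cpu"],
        check=True,
    )
```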

To install Chrome:

download it using this command: wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb

execute the downloaded installer: sudo apt install ./google-chrome-stable_current_amd64.deb

launch the browser: google-chrome

Later, I decided to make the default browser icon launch Google Chrome, so I followed Grant Curell's answer; basically:

run xfce4-settings-manager, open "Preferred Applications", and under "Web Browser" click "Other...", then type in /usr/bin/google-chrome

Drive folder FYP link https://drive.google.com/drive/folders/14tARLTnBGZKw-40RXfvOdeDGFLDtzx5M?usp=sharing

Regular sampling: https://drive.google.com/file/d/1oUCwcqInKR5KcQh1lDvIJQLLnXV5kA60/view?usp=sharing

DMS: https://dms.uom.lk/s/HJmKfQgnB8LfyrW

YOLOPv2 🚀: Better, Faster, Stronger for Panoptic Driving Perception

Cheng Han, Qichao Zhao, Shuyi Zhang, Yinzi Chen, Zhenlin Zhang, Jinwei Yuan

News

  • August 30, 2022: We've released the inference code and trained model and published a Hugging Face Spaces web demo, just enjoy it!

  • August 24, 2022: We've released the tech report for YOLOPv2. This work is still in progress and code/models are coming soon. Please stay tuned! ☕️

Introduction

😁We present an excellent multi-task network based on YOLOP💙, which is called YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception. The advantages of YOLOPv2 can be summarized as below:

  • Better👏: we propose an end-to-end perception network with a better feature-extraction backbone and a better bag of freebies for the training process.
  • Faster✈️: we employ more efficient ELAN structures to achieve reasonable memory allocation for our model.
  • Stronger💪: the proposed model has a stable network design and is robust when adapting to various scenarios.


Results

We use the BDD100K dataset, and experiments are run on an NVIDIA Tesla V100.

Web Demo

Visualization

Model: trained on the BDD100K dataset and tested on the T3CAIC camera.

Model parameter and inference speed

| Model | Size | Params | Speed (fps) |
| --- | --- | --- | --- |
| YOLOP | 640 | 7.9M | 49 |
| HybridNets | 640 | 12.8M | 28 |
| YOLOPv2 | 640 | 38.9M | 91 (+42) |

Traffic Object Detection Result

Result Visualization
| Model | mAP@0.5 (%) | Recall (%) |
| --- | --- | --- |
| MultiNet | 60.2 | 81.3 |
| DLT-Net | 68.4 | 89.4 |
| Faster R-CNN | 55.6 | 77.2 |
| YOLOv5s | 77.2 | 86.8 |
| YOLOP | 76.5 | 89.2 |
| HybridNets | 77.3 | 92.8 |
| YOLOPv2 | 83.4 (+6.1) | 91.1 (-1.7) ⬇️ |

Drivable Area Segmentation

Result Visualization
| Model | Drivable mIoU (%) |
| --- | --- |
| MultiNet | 71.6 |
| DLT-Net | 71.3 |
| PSPNet | 89.6 |
| YOLOP | 91.5 |
| HybridNets | 90.5 |
| YOLOPv2 | 93.2 (+1.7) ⬆️ |

Lane Line Detection

Result Visualization
| Model | Accuracy (%) | Lane Line IoU (%) |
| --- | --- | --- |
| Enet | 34.12 | 14.64 |
| SCNN | 35.79 | 15.84 |
| Enet-SAD | 36.56 | 16.02 |
| YOLOP | 70.5 | 26.2 |
| HybridNets | 85.4 | 31.6 |
| YOLOPv2 | 87.3 (+1.9) ⬆️ | 27.2 (-4.4) ⬇️ |

Day-time and Night-time visualization results

Models

You can get the model from the "Model weights" Google Drive link at the top of this README.
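To sanity-check the downloaded weights outside of demo.py, here is a minimal sketch. It assumes the released file is a TorchScript archive (as in the upstream YOLOPv2 demo) and uses the illustrative local path data/weights/yolopv2.pt; adjust both assumptions to match your download.

```python
import torch

# Illustrative local path for the downloaded weights; adjust to wherever you saved them.
WEIGHTS = "data/weights/yolopv2.pt"

# Assumption: the released checkpoint is a TorchScript module, so torch.jit.load works.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load(WEIGHTS, map_location=device)
model.to(device).eval()

# Dummy forward pass with a 640x640 RGB tensor just to confirm the model loads and runs.
dummy = torch.zeros(1, 3, 640, 640, device=device)
with torch.no_grad():
    outputs = model(dummy)
print(type(outputs))
```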

Demo Test

We provide two testing methods; you can store the output as an image or a video.

python demo.py  --source data/example.jpg
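The command above covers the image case; for video input it is assumed here (following the upstream YOLOPv2 demo) that --source also accepts a video file, e.g. with a hypothetical clip data/example.mp4:

python demo.py --source data/example.mp4 --device cpu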

Third-Party Resources

License

YOLOPv2 is released under the MIT License.
