Diversify-and-Aggregate: Augmenting Replay with Generative Modeling Make Stronger Incremental Segmentation Models
This is an official implementation of the paper "Diversify-and-Aggregate: Augmenting Replay with Generative Modeling Make Stronger Incremental Segmentation Models".
This repository has been tested with the following libraries:
- Python (3.9)
- PyTorch (2.2.0)
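The following is a minimal sketch (not part of the repo) to confirm your environment matches the tested versions; other versions may work but are untested:

```python
# Environment check sketch: print the versions this repository was tested with.
import sys
import torch

print("Python :", sys.version.split()[0])      # tested with 3.9
print("PyTorch:", torch.__version__)           # tested with 2.2.0
print("CUDA   :", torch.cuda.is_available())   # multi-GPU training expects CUDA devices
```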
We use the 10,582 augmented training samples and 1,449 validation samples of PASCAL VOC 2012. You can download the original dataset here. To train our model with the augmented samples, please download the labels of the augmented samples ('SegmentationClassAug') and the list of file names ('train_aug.txt'). The data directory should be organized as follows:
└── /dataset/VOC2012
├── Annotations
├── ImageSets
│ └── Segmentation
│ ├── train_aug.txt
│ └── val.txt
├── JPEGImages
├── SegmentationClass
└── SegmentationClassAug
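As a quick sanity check of the layout above, the sketch below (not part of the repo; the root path is only an example) verifies the directories and counts the samples listed in the split files:

```python
# Sketch: verify the PASCAL VOC 2012 layout and split sizes described above.
from pathlib import Path

root = Path("/dataset/VOC2012")                      # adjust to your dataset root
splits = root / "ImageSets" / "Segmentation"

for name in ["JPEGImages", "SegmentationClass", "SegmentationClassAug"]:
    assert (root / name).is_dir(), f"missing directory: {name}"

n_train = len([l for l in (splits / "train_aug.txt").read_text().splitlines() if l.strip()])
n_val = len([l for l in (splits / "val.txt").read_text().splitlines() if l.strip()])
print(n_train, "training samples (expected 10582)")
print(n_val, "validation samples (expected 1449)")
```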
We use 20,210 training samples and 2,000 validation samples for ADE20K. You can download the dataset here. The data directory should be organized as follows:
└── /dataset/ADEChallengeData2016
├── annotations
├── images
├── objectInfo150.txt
└── sceneCategories.txt
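A similar sketch for ADE20K, assuming the standard images/training and images/validation subfolders of ADEChallengeData2016:

```python
# Sketch: verify the ADE20K layout and split sizes described above.
from pathlib import Path

root = Path("/dataset/ADEChallengeData2016")          # adjust to your dataset root
n_train = len(list((root / "images" / "training").glob("*.jpg")))
n_val = len(list((root / "images" / "validation").glob("*.jpg")))
print(n_train, "training images (expected 20210)")
print(n_val, "validation images (expected 2000)")
assert (root / "objectInfo150.txt").is_file()
assert (root / "sceneCategories.txt").is_file()
```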
# An example script for the 15-5 overlapped setting of PASCAL VOC
GPU=0,1
BS=16 # batch size per GPU (total 32 across 2 GPUs)
SAVEDIR='saved_voc_pos2'
TASKSETTING='overlap' # or 'disjoint'
TASKNAME='15-5' # or ['15-1', '19-1', '10-1', '5-3']
EPOCH=60
INIT_LR=0.001
LR=0.0001
INIT_POSWEIGHT=2
MEMORY_SIZE=100
NAME='DA'
python train_voc.py -c configs/config_voc.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 0 --lr ${INIT_LR} --bs ${BS} --pos_weight_new ${INIT_POSWEIGHT}
python train_voc.py -c configs/config_voc.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 1 --lr ${LR} --bs ${BS} --freeze_bn --mem_size ${MEMORY_SIZE} --pos_weight_new 1 --pos_weight_old 1 --pkd 5 --mbce_new_extra 1 --mbce_old_extra 1 --use_Replace
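For reference, a task name encodes 'initial classes-new classes per step'. The sketch below is an illustration only (the repo derives its splits from the config files); it shows which foreground class indices each step covers under the usual class-incremental split, with 20 classes for PASCAL VOC and 150 for ADE20K:

```python
# Illustration: map a task name such as '15-5' to per-step class indices.
def task_splits(task_name: str, num_classes: int):
    base, inc = (int(x) for x in task_name.split("-"))
    steps = [list(range(1, base + 1))]                     # step 0 learns the base classes
    start = base + 1
    while start <= num_classes:                            # each later step adds `inc` classes
        steps.append(list(range(start, min(start + inc, num_classes + 1))))
        start += inc
    return steps

for step, classes in enumerate(task_splits("15-5", num_classes=20)):
    print(f"step {step}: classes {classes[0]}-{classes[-1]}")
# step 0: classes 1-15
# step 1: classes 16-20
```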
# An example script for the 50-50 overlapped setting of ADE20K
GPU=0,1
BS=12 # batch size per GPU (total 24 across 2 GPUs)
SAVEDIR='saved_ade'
TASKSETTING='overlap'
TASKNAME='50-50' # or ['100-10', '100-50']
EPOCH=100
INIT_LR=0.0025
LR=0.00025
MEMORY_SIZE=300
NAME='DA'
python train_ade.py -c configs/config_ade.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 0 --lr ${INIT_LR} --bs ${BS}
python train_ade.py -c configs/config_ade.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 1 --lr ${LR} --bs ${BS} --freeze_bn --mem_size ${MEMORY_SIZE} --pos_weight_new 1 --pos_weight_old 1 --pkd 1 --mbce_new_extra 1 --mbce_old_extra 1 --use_Replace
# Evaluate a trained model on PASCAL VOC
python eval_voc.py -d 0 -r path/to/weight.pth
We provide the following resources from this link:
- configuration files
- pretrained weights
- augmented images and adapter checkpoints (LoRA, text token)
- code for fine-tuning MR-LoRA (coming soon)
- This template is borrowed from pytorch-template.
- This code is based on DKD (Decomposed Knowledge Distillation for Class-Incremental Semantic Segmentation, NeurIPS 2022).