
How to reproduce the results in the paper? #8

Open
uestchjw opened this issue Mar 24, 2024 · 3 comments

Comments


uestchjw commented Mar 24, 2024

Hi, thanks for your inspiring work. Such a framework is really useful for the collaborative perception community.

I want to reproduce the results in the paper, so I run "python opencood/tools/train.py -y xxx.yaml". However, the results are much lower than the ones in the paper. The models I used are CoAlign, V2X-ViT and FCooper. For the dataset, I use DAIR-V2X-C with the complemented annotations you mentioned. I train from scratch on a 3090 GPU with spconv 2.x installed. I use the original configs from this repository (LiDAR-only, DAIR-V2X); the only change is that I add the "noise_setting" part.

I am confused about where the problem is. Thank you for any reply.

My CoAlign results (from inference_w_noise.py):

# Trained with noise=0.2, tested with noise=[0, 0.2, 0.4, 0.6] (collaborative training, collaborative testing)
ap30:
- 0.816185254499443
- 0.8040933486788117
- 0.6978970941049449
- 0.48717034385272445
ap50:
- 0.7732392719112463
- 0.6973580979895226
- 0.37257209996653307
- 0.16327987780088962
ap70:
- 0.6189771349085729
- 0.288480777620073
- 0.06357161684122746
- 0.017638878577427318
noise_setting:
  add_noise: true
  # laplace: true
  args:
    pos_std: 0.2
    rot_std: 0.2
    pos_mean: 0
    rot_mean: 0
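
For reference, this is how I understand the noise_setting above is applied — a minimal sketch on my side, assuming an OpenCOOD-style [x, y, z, roll, yaw, pitch] pose layout, not necessarily HEAL's exact code:

```python
# Minimal sketch (assumption, not HEAL's exact implementation): Gaussian noise
# with pos_std / rot_std perturbs each non-ego agent's pose before its
# features/boxes are projected into the ego frame.
import numpy as np

def add_pose_noise(pose, pos_std=0.2, rot_std=0.2, pos_mean=0.0, rot_mean=0.0):
    """pose: [x, y, z, roll, yaw, pitch] (meters / degrees, illustrative layout)."""
    noisy = np.array(pose, dtype=float)
    noisy[:2] += np.random.normal(pos_mean, pos_std, size=2)  # x, y translation noise
    noisy[4] += np.random.normal(rot_mean, rot_std)           # yaw rotation noise
    return noisy
```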

yifanlu0227 (Owner) commented Mar 24, 2024

This repo does not contain the agent-object pose graph optimization for CoAlign. To use agent-object pose graph optimization, please take a look at my CoAlign repo, where all checkpoints are provided.

A good choice is integrating the agent-object pose graph optimization code into HEAL, which would not be too difficult.
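
For illustration, here is a very simplified sketch of the underlying idea (not the actual CoAlign implementation; the SE(2) simplification, function names, and matched-center formulation are only assumptions): refine a noisy neighbor-to-ego relative pose so that object centers detected by both agents line up after transformation.

```python
# Simplified, illustrative sketch of agent-object pose alignment
# (NOT the actual CoAlign agent-object pose graph optimization).
import numpy as np
from scipy.optimize import least_squares

def se2_transform(points, pose):
    """Apply an SE(2) pose (x, y, yaw) to an Nx2 array of points."""
    x, y, yaw = pose
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return points @ rot.T + np.array([x, y])

def residuals(pose, ego_centers, nbr_centers):
    # ego_centers: matched object centers in the ego frame (Nx2)
    # nbr_centers: the same objects' centers in the neighbor frame (Nx2)
    return (se2_transform(nbr_centers, pose) - ego_centers).ravel()

def refine_relative_pose(noisy_pose, ego_centers, nbr_centers):
    """Least-squares refinement of the neighbor->ego relative pose (x, y, yaw)."""
    return least_squares(residuals, noisy_pose, args=(ego_centers, nbr_centers)).x
```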

uestchjw (Author) commented

Thank you. In fact, I also noticed that the CoAlign code in this repo does not seem to include pose graph optimization, but I wasn't sure.

But why can't I reproduce the SOTA results of V2X-ViT and FCooper?

(image attachment)

The results for V2X-ViT:
ap30:
- 0.780694734959733
- 0.7655255715912663
- 0.6609692529447369
- 0.469753717908351
ap50:
- 0.6950774891486305
- 0.6160698426259232
- 0.3343741728361003
- 0.16437058587099285
ap70:
- 0.4874117271144235
- 0.23090746717424263
- 0.05669228468952772
- 0.018887158916055982
The results for FCooper:
ap30:
- 0.6957991624234205
- 0.6792384732967583
- 0.5818865248090743
- 0.40475246140625193
ap50:
- 0.5926837593078311
- 0.5126677534820311
- 0.2809762011325662
- 0.13208793899142426
ap70:
- 0.4029150674214991
- 0.1869755262331619
- 0.04623040772861535
- 0.014119602232882318

yifanlu0227 (Owner) commented

There will be differences in the communication range, detection range, etc. between the two repos' yaml configurations.

Please make sure the experimental settings are the same in order to reproduce these results. BTW, the spconv version may also affect the results, but I am not sure.
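
A quick way to spot such differences is to diff the relevant fields of the two yaml files, e.g. with a small helper like this (the key names below are just examples and should be adapted to the fields actually present in each config):

```python
# Hedged sketch of a config diff helper; requires PyYAML. The CANDIDATE_KEYS are
# assumed names -- replace them with the actual yaml fields (e.g., communication
# range, LiDAR cropping range, postprocess anchor settings) in each repo.
import yaml

CANDIDATE_KEYS = ["comm_range", "preprocess", "postprocess", "train_params"]  # assumed names

def load_yaml(path):
    with open(path) as f:
        return yaml.safe_load(f)

def diff_configs(path_a, path_b, keys=CANDIDATE_KEYS):
    cfg_a, cfg_b = load_yaml(path_a), load_yaml(path_b)
    for key in keys:
        if cfg_a.get(key) != cfg_b.get(key):
            print(f"{key} differs:\n  A: {cfg_a.get(key)}\n  B: {cfg_b.get(key)}")

# diff_configs("heal_dairv2x.yaml", "coalign_dairv2x.yaml")  # hypothetical paths
```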
