
Normalization on input point cloud #25

Open
dqj5182 opened this issue Dec 2, 2024 · 2 comments

Comments

@dqj5182

dqj5182 commented Dec 2, 2024

May I ask why the input point clouds are being normalized to fit in [-.6, .6], as implemented in L51-L53?

To me, this seems counter-intuitive, because the bigger the input point cloud is, the more representation (or more detail) the model can learn. In an extreme case where all input point clouds are fitted to [-.1, .1], the ShapeFormer model would not be able to learn much, since the point cloud is confined to a very small region.
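For reference, the kind of normalization I am referring to does roughly this (my own paraphrase of what I understand L51-L53 to do, not the exact repo code):

```python
import numpy as np

def normalize_points(points, half_extent=0.6):
    """Center a point cloud and scale it so it fits in
    [-half_extent, half_extent]^3 (paraphrased sketch)."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    points = points - center
    return points / np.abs(points).max() * half_extent
```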

So, I tried to fit the input point clouds to [-1, 1], but performance degraded compared to the original code.
May I ask why this performance degradation happens? And may I ask what scale or size (e.g., fitting to the [-.6, .6] cube) a custom dataset's point clouds should have?

@QhelDIV
Owner

QhelDIV commented Dec 2, 2024

Hi, thanks for the question.
Actually, this step won't affect the scale of the final augmented shape.
It is mainly there for the rotation: sometimes after rotation the shape goes outside the [-1, 1] range, and it is relatively hard to prevent that from happening.
So the easiest solution is to make the shape smaller, apply the rotation, and then apply a random scale again.
Why .6 specifically? Because .6 is roughly equal to 1/sqrt(3), where sqrt(3) is the radius of the minimum bounding ball of the unit cube. This way, we know the shape won't go out of the cube for any rotation.

Why does performance drop with a larger scale? Because some training shapes get cut off.
How do you determine the value for a custom dataset? You can just use .6, since the apply_random_scaling function will handle the scale again, and since it runs after the rotation we are safe.
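A minimal sketch of that order of operations (illustrative only; in the repo the real rescaling is done by apply_random_scaling, and the scale range below is an arbitrary placeholder):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def shrink_rotate_rescale(points, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # 1) Shrink: fit the shape into [-0.6, 0.6]^3 (0.6 ~ 1/sqrt(3)),
    #    so its bounding ball stays inside the unit cube.
    center = (points.max(0) + points.min(0)) / 2.0
    points = points - center
    points = points / np.abs(points).max() * 0.6
    # 2) Rotate: a random rotation won't push the shrunken shape
    #    outside [-1, 1]^3.
    points = points @ Rotation.random().as_matrix().T
    # 3) Rescale: the final size is decided here (placeholder range;
    #    the repo's apply_random_scaling picks the actual range),
    #    not by the 0.6 factor above.
    return points * rng.uniform(0.8, 1.0)
```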

@dqj5182
Author

dqj5182 commented Dec 21, 2024

Thanks for your reply!

But I am curious about the negative impact of the normalization to [-.6, .6] on the fidelity of the predicted mesh.
For example, if I run the demo of the official repo as it is, I get the mesh below:
[image: 0_s1_decoded]
which seems smaller than it should be, and this results in the mesh being modeled with a limited number of quantized features, as visualized below:
[image: 0_s0_quant_ind]

May I ask why this is the case for ShapeFormer, and whether this can be improved (maybe by adaptively resizing the input partial point cloud, as sketched below)?
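To clarify what I mean by adaptively resizing (just an idea; reconstruct_fn is a placeholder for the actual ShapeFormer inference call):

```python
import numpy as np

def reconstruct_with_adaptive_scale(partial_points, reconstruct_fn, half_extent=0.9):
    """Enlarge the partial cloud so it occupies more of the quantized grid,
    run reconstruction, then undo the scaling on the predicted vertices."""
    center = (partial_points.max(0) + partial_points.min(0)) / 2.0
    centered = partial_points - center
    scale = half_extent / np.abs(centered).max()
    vertices, faces = reconstruct_fn(centered * scale)
    return vertices / scale + center, faces
```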

Also, I am curious why ShapeFormer's dataloader performs data augmentation (the apply_random_transforms function) in the test/demo phase, as shown in this code? From my perspective, this should only happen in the training phase.
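Concretely, I would expect something like the guard below in the dataloader (a hypothetical sketch; only apply_random_transforms is from the repo, the other names are placeholders):

```python
def __getitem__(self, idx):
    points = self.load_partial_points(idx)       # placeholder loader
    if self.split == "train":                    # augment only while training
        points = apply_random_transforms(points)
    return points
```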

Sorry for so many questions, and thanks as always for your great feedback!
