Normalization on input point cloud #25
May I ask why the input point clouds are normalized to fit within [-.6, .6], as implemented in L51-L53?

This seems counter-intuitive to me: the larger the input point cloud, the more representation (or detail) the model should be able to learn. In the extreme case where all input point clouds were fit to [-.1, .1], the ShapeFormer model would not be able to learn much, since each point cloud would occupy only a very small region (see the sketch below for the kind of normalization I mean).
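For concreteness, here is a minimal sketch of this kind of fit-to-cube normalization in NumPy. It illustrates the idea being asked about; it is not the repository's actual L51-L53 code, and the function name is hypothetical.

```python
import numpy as np

def normalize_pointcloud(points: np.ndarray, half_extent: float = 0.6) -> np.ndarray:
    """Center a point cloud and scale it isotropically so its largest
    axis-aligned extent fits inside [-half_extent, half_extent]^3.
    half_extent=0.6 mirrors the [-.6, .6] range discussed in this issue;
    this is an illustrative re-implementation, not ShapeFormer's code."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    points = points - center
    # Isotropic scale: divide by the largest half-extent over all axes,
    # so the shape's aspect ratio is preserved.
    scale = np.abs(points).max()
    return points * (half_extent / scale)

# Example: a unit-cube cloud ends up inside [-0.6, 0.6]^3.
pts = np.random.rand(2048, 3)  # points in [0, 1]^3
norm = normalize_pointcloud(pts, half_extent=0.6)
assert np.abs(norm).max() <= 0.6 + 1e-6
```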
So I tried fitting the input point clouds to [-1, 1], but performance degraded compared to the original code.

May I ask why this degradation happens? And what scale (e.g., fitting to a [-.6, .6] cube) should a custom dataset's point clouds use?

Comments

Hi, thanks for the question. Why does performance drop with a larger scale? Some training shapes get cut off.
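To illustrate the cut-off effect, the toy sketch below applies a random scale and shift (standing in for data augmentation; the transform ranges are invented for illustration, not ShapeFormer's actual parameters) and counts how many points leave a [-1, 1] cube. A cloud already filling [-1, 1] loses points under any up-scaling, while a [-0.6, 0.6] cloud has headroom.

```python
import numpy as np

def clipped_fraction(points, max_scale, max_shift, bound=1.0, seed=0):
    """Apply one random similarity transform and return the fraction of
    points that end up outside [-bound, bound]^3. Purely illustrative;
    the transform parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(1.0, max_scale)                  # random up-scaling
    t = rng.uniform(-max_shift, max_shift, size=3)   # random translation
    moved = points * s + t
    return (np.abs(moved) > bound).any(axis=1).mean()

pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(4096, 3))
print(clipped_fraction(pts, max_scale=1.2, max_shift=0.1))        # many points cut off at [-1, 1]
print(clipped_fraction(pts * 0.6, max_scale=1.2, max_shift=0.1))  # zero: 0.6 * 1.2 + 0.1 < 1
```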
Thanks for your reply! But I am curious about the negative impact that normalization to [-.6, .6] has on the fidelity of the predicted mesh. May I ask why this is the case for ShapeFormer, and whether it can be improved (perhaps by adaptively resizing the input partial point cloud)? I am also curious why ShapeFormer's dataloader applies data augmentation (the apply_random_transforms function) in the test and demo phases, as shown in this code. In my view, this should happen only in the training phase. Sorry for so many questions, and thanks as always for your great feedback!
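On the augmentation question, the conventional pattern is to gate random transforms on the dataset split, as in the hypothetical sketch below. The dataset class and the stub for apply_random_transforms (a name taken from the issue) are stand-ins; the real dataloader may use that function differently at test time, e.g., to canonicalize inputs rather than to augment them.

```python
import numpy as np

def apply_random_transforms(points: np.ndarray) -> np.ndarray:
    """Stand-in for the repository's augmentation; the real function's
    behavior is not reproduced here."""
    return points * np.random.uniform(0.9, 1.1)

class PartialPointCloudDataset:
    """Hypothetical dataset illustrating the usual convention:
    random augmentation is applied only for the training split."""
    def __init__(self, clouds, split="train"):
        self.clouds = clouds
        self.split = split

    def __len__(self):
        return len(self.clouds)

    def __getitem__(self, idx):
        points = self.clouds[idx]
        if self.split == "train":
            points = apply_random_transforms(points)  # skipped for test/demo
        return points
```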