
Nan Loss #37

Open
qqc222111 opened this issue Dec 3, 2023 · 3 comments
@qqc222111
Why is the number 0.9 used in `_bilinear_interpolate(img_masks, u_2_flat, v_2_flat) * img_masks >= 0.9`? Can it be replaced to avoid the NaN loss value, and if so, what should it be replaced with?
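As I understand it, the `>= 0.9` comparison turns the interpolated mask into a boolean validity mask: samples well inside the mask pass, samples near mask edges (which get fractional values from bilinear blending) are rejected. A minimal standalone sketch of that idea (the `sampled` values here are made up for illustration, not taken from the repo):

```python
import torch

# Hypothetical mask values returned by bilinear sampling: points near a
# mask boundary receive fractional values between 0 and 1.
sampled = torch.tensor([1.0, 0.95, 0.5, 0.25, 0.0])

strict = sampled >= 0.9   # keeps only samples well inside the mask
loose = sampled >= 0.24   # also admits edge / partially-valid samples

print(strict.tolist())  # [True, True, False, False, False]
print(loose.tolist())   # [True, True, True, True, False]
```

Lowering the threshold therefore admits more (but less reliable) samples rather than changing the loss itself.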

When I replace it with 0.24, it runs, but loss_depth_consistency is too small compared to loss_sparse_flow. Is that normal?

Also, in `_bilinear_interpolate()`, in the call `torch.nn.functional.grid_sample(input=im.permute(0, 3, 1, 2), grid=grid, mode='bilinear', align_corners=True, padding_mode=padding_mode).permute(0, 2, 3, 1)`, why is the result not exactly 0 or 1, given that every value of `im` is either 0 or 1?
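This fractional output is expected behaviour: with `mode='bilinear'`, `grid_sample` blends the neighbouring pixels, so sampling a binary mask at a sub-pixel location yields an intermediate value. A minimal standalone sketch (not the repo's code, no padding_mode needed here):

```python
import torch
import torch.nn.functional as F

# A 1x1x2x2 binary "mask": one pixel is 1, the rest are 0.
im = torch.tensor([[[[0.0, 1.0],
                     [0.0, 0.0]]]])

# Sample at the normalized centre (0, 0), which falls between all four
# pixels when align_corners=True.
grid = torch.zeros(1, 1, 1, 2)  # shape: N x H_out x W_out x 2
out = F.grid_sample(im, grid, mode='bilinear', align_corners=True)

print(out.item())  # 0.25: the average of the four surrounding pixels
```

This is presumably why the mask is thresholded (e.g. `>= 0.9`) afterwards instead of being compared to exactly 1.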

@qqc222111
Author

The problem above is now nearly solved by adjusting pre_process_data; I kept the number 0.9. The remaining problem is that loss_depth_consistency is still too small compared to loss_sparse_flow. Is that normal?

@qqc222111
Author

Also, the image is not fully trained.

@joelive

joelive commented Sep 14, 2024

> The problem above is now nearly solved by adjusting pre_process_data; I kept the number 0.9. The remaining problem is that loss_depth_consistency is still too small compared to loss_sparse_flow. Is that normal?

How did you adjust pre_process_data to make loss_depth_consistency normal?
