Why is the number 0.9 used in `_bilinear_interpolate(img_masks, u_2_flat, v_2_flat) * img_masks >= 0.9`? Can it be replaced to avoid the NaN loss value, and if so, what should it be replaced with?
When I replace it with 0.24, training runs, but loss_depth_consistency is too small compared to loss_sparse_flow. Is that normal?
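For context, one common way a threshold like this produces a NaN loss (an assumption about this code, not something confirmed from the repo): if the thresholded mask selects zero pixels, the mean over an empty tensor is NaN. A minimal sketch with made-up names and values:

```python
import torch

# Hypothetical stand-ins: per-pixel depth errors and the interpolated mask
# values that the 0.9 threshold is applied to (none of these names come
# from the repo; they are for illustration only).
err = torch.tensor([1.0, 2.0, 3.0, 4.0])
sampled_mask = torch.tensor([0.30, 0.55, 0.70, 0.85])

valid = sampled_mask >= 0.9      # too strict here: selects no pixels at all
loss = err[valid].mean()         # mean over an empty tensor -> nan

# A common guard: divide by the clamped count instead, so an empty
# selection yields 0 rather than NaN.
safe_loss = err[valid].sum() / valid.sum().clamp(min=1)
```

So lowering the threshold can "fix" the NaN simply by letting more pixels through, while also changing which pixels the depth-consistency loss averages over.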
Also, inside `_bilinear_interpolate()`, in the call
`torch.nn.functional.grid_sample(input=im.permute(0, 3, 1, 2), grid=grid, mode='bilinear', align_corners=True, padding_mode=padding_mode).permute(0, 2, 3, 1)`,
why are there result values strictly between 0 and 1, when every value of `im` is either 0 or 1?
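On that second question: `grid_sample` with `mode='bilinear'` blends the four neighbouring pixels, so sampling a binary mask at a non-integer location gives fractional values in [0, 1]. A self-contained sketch (the tensor shapes mirror the call above, but the data is made up):

```python
import torch

# A 1x4x4x1 binary mask: left half 0, right half 1.
im = torch.zeros(1, 4, 4, 1)
im[0, :, 2:, 0] = 1.0

# One sample point at the image centre; grid coordinates live in [-1, 1].
grid = torch.zeros(1, 1, 1, 2)

out = torch.nn.functional.grid_sample(
    input=im.permute(0, 3, 1, 2), grid=grid, mode='bilinear',
    align_corners=True, padding_mode='zeros',
).permute(0, 2, 3, 1)

# The centre falls halfway between a 0-column and a 1-column, so bilinear
# interpolation returns 0.5 -- neither 0 nor 1.
print(out.item())  # 0.5
```

This is presumably also why the comparison uses a high threshold like 0.9: it keeps only samples that land well inside the mask region, not near its border.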
Now the problem above is nearly solved by adjusting pre_process_data; I kept the number 0.9. The remaining problem is that loss_depth_consistency is still too small compared to loss_sparse_flow. Is that normal?
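On the magnitude gap: it isn't necessarily abnormal for one term to be much smaller, but if it then contributes almost no gradient, a fixed weight on the combined loss is a common fix. A sketch with dummy values (the weight and both numbers here are illustrative assumptions, not values from the repo):

```python
import torch

# Dummy values standing in for the two loss terms discussed in this thread.
loss_depth_consistency = torch.tensor(0.01)
loss_sparse_flow = torch.tensor(1.0)

# Upweight the smaller term so both contribute comparably to the gradient;
# 100.0 is an illustrative choice that would need tuning in practice.
w_depth = 100.0
total = w_depth * loss_depth_consistency + loss_sparse_flow
print(total.item())  # 2.0
```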
How did you adjust pre_process_data to make loss_depth_consistency normal?