First of all, thanks for sharing this amazing work!

I was wondering whether you have tried a more fine-grained sampling strategy, i.e. lowering the granularity used to define the positive and negative samples from the ground truth while increasing the number of sampled queries.

Currently the granularity is $\sigma = 0.1$ and you sample 550 points during training. If you decreased $\sigma$ to 0.01 or 0.001, would this produce reconstructions with more detail, and how much extra training time would it cost? Thanks.
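To make the question concrete, here is a rough NumPy sketch of the kind of perturbation-based sampling I have in mind: ground-truth surface points are jittered by Gaussian noise of scale $\sigma$, and the perturbed queries are labeled positive or negative by their distance to the surface. The function name, labeling rule, and parameters are my own assumptions for illustration, not code from this repository.

```python
import numpy as np

def sample_queries(gt_points, sigma=0.1, n_samples=550, rng=None):
    """Hypothetical sketch: jitter ground-truth surface points with
    Gaussian noise of scale `sigma` and label the resulting queries
    by how far the jitter moved them from the surface point."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(gt_points), size=n_samples, replace=True)
    base = gt_points[idx]                        # (n_samples, 3) surface points
    noise = rng.normal(scale=sigma, size=base.shape)
    queries = base + noise                       # perturbed query locations
    # Queries displaced by less than `sigma` are treated as positives,
    # the rest as negatives (assumed labeling rule, for illustration only).
    labels = (np.linalg.norm(noise, axis=-1) < sigma).astype(np.float32)
    return queries, labels
```

My rough intuition is that lowering $\sigma$ alone should not change the per-iteration cost (still 550 queries), whereas increasing the number of queries should scale the per-iteration cost roughly linearly, assuming the per-query network cost stays constant. Is that consistent with what you observed?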