
Clarification on the Implementation for Splatfacto Pruning Condition #3545

Open
Tavish9 opened this issue Dec 10, 2024 · 2 comments

Comments


Tavish9 commented Dec 10, 2024

Hi @jb-ye and @brentyi, thank you for PR #3376, which simplified Splatfacto by adopting the gsplat strategy.

Although some time has passed, I have a question about the default strategy versus the original implementation for pruning Gaussians using the alpha condition.

In the gsplat implementation, the pruning condition in gsplat/strategy/default.py is defined as follows:

self.reset_every = self.config.reset_alpha_every * self.config.refine_every
if step % self.reset_every == 0:
    reset_opa(
        params=params,
        optimizers=optimizers,
        state=state,
        value=self.prune_opa * 2.0,
    )

However, in the original implementation in splatfacto v1.1.4, the condition is:

reset_interval = self.config.reset_alpha_every * self.config.refine_every
if self.step < self.config.stop_split_at and self.step % reset_interval == self.config.refine_every:
    reset_value = self.config.cull_alpha_thresh * 2.0
    self.opacities.data = torch.clamp(
        self.opacities.data,
        max=torch.logit(torch.tensor(reset_value, device=self.device)).item(),
    )

It seems that the resetting behavior has been moved slightly earlier. Could you clarify the main considerations behind this change?

Furthermore, the condition in the official 3DGS implementation is the same as the one gsplat uses:

if iteration % opt.opacity_reset_interval == 0 or (dataset.white_background and iteration == opt.densify_from_iter):
    gaussians.reset_opacity()

I am also curious to understand why the original implementation specifically uses self.config.refine_every in the condition.
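To make the timing difference concrete, here is a small standalone sketch (not nerfstudio or gsplat code), assuming the default values `reset_alpha_every=30` and `refine_every=100`, that lists the steps at which each condition fires:

```python
# Illustrative sketch: at which steps does each opacity-reset condition trigger?
reset_alpha_every = 30
refine_every = 100
stop_split_at = 15000
reset_interval = reset_alpha_every * refine_every  # 3000

# gsplat / official-3DGS style: fire exactly on multiples of the interval
gsplat_steps = [s for s in range(1, 10000) if s % reset_interval == 0]

# splatfacto v1.1.4 style: fire refine_every steps *after* each multiple
v114_steps = [
    s for s in range(1, 10000)
    if s < stop_split_at and s % reset_interval == refine_every
]

print(gsplat_steps)  # [3000, 6000, 9000]
print(v114_steps)    # [100, 3100, 6100, 9100]
```

Note that, taken literally, the v1.1.4 condition also fires once very early (at `step == refine_every`), and every subsequent reset lands `refine_every` steps later than in the gsplat version.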

Additionally, I ran experiments with versions v1.1.4 and v1.1.5 on the same code and dataset, and encountered something odd.

Although the final results were similar between the two versions, there were some notable differences:
1. In v1.1.5, the pruning of GSs was less aggressive, resulting in a significantly larger number of GSs being retained.
2. When I modified the alpha condition in v1.1.5 to match the condition used in v1.1.4, the behavior of the two versions became almost the same.

I would appreciate any insights you have into these differences and the potential reasons behind them.


jb-ye commented Dec 12, 2024

I think the current behavior of resetting (v1.1.5) is what we desired. If we reset opacity earlier, we have more iterations to bring back the opacity post resetting because we have a mechanism to pause refinement after resetting ( https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/models/splatfacto.py#L264 ). Therefore, v1.1.5 produces more gaussians.
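For readers skimming the thread, here is a minimal sketch of the pause-after-reset interaction described above. The function and parameter names are hypothetical, not actual nerfstudio or gsplat identifiers:

```python
# Hypothetical sketch of "pause refinement after an opacity reset".
# Names are illustrative only.
def refinement_allowed(step: int, last_reset_step: int, pause_steps: int = 100) -> bool:
    """Allow densify/prune only after the pause window following a reset."""
    return step - last_reset_step >= pause_steps

# Resetting earlier (v1.1.5 timing) leaves more iterations between the reset
# and the next pruning pass, so more Gaussians can recover their opacity above
# the cull threshold, and more of them are retained.
```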


Tavish9 commented Dec 13, 2024

Thanks for the reply; the issue is now resolved. However, the current default settings can slightly skew the evaluation metrics because of this timing, though it isn't a bug.

With the default configuration, --steps-per-eval-all-images 1000 --pipeline.model.reset-alpha-every 30 --pipeline.model.refine-every 100, the model evaluates every 1000 steps and resets alpha every 3000 steps. This produces significantly lower scores at steps 3000, 6000, etc., because evaluation runs right after the reset (see this line of code).

If evaluations were conducted in advance of the reset, the metrics would reflect the model's performance more accurately.
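A quick sanity check of that timing collision, assuming the default values quoted above:

```python
# Illustrative check: with eval every 1000 steps and an opacity reset every
# 3000 steps, every third evaluation lands immediately after a reset.
eval_every = 1000
reset_interval = 30 * 100  # reset_alpha_every * refine_every

affected = [s for s in range(eval_every, 10001, eval_every) if s % reset_interval == 0]
print(affected)  # [3000, 6000, 9000]
```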
