
[bug]: Excessive VRAM usage on SDXL VAE decode stage #7587

Open · fintarn opened this issue Jan 23, 2025 · 1 comment
Labels
bug Something isn't working

Comments

fintarn commented Jan 23, 2025

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

3080

GPU VRAM

10GB

Version number

5.6.0

Browser

Invoke Client

Python dependencies

No response

What happened

  • Installed Invoke from a fresh install
  • Installed models and did some Flux-dev generations without issues
  • Loaded an SDXL checkpoint with sdxl_vae at 32-bit precision
  • Generated an image to canvas and noticed that after all the steps completed, nothing seemed to be happening. This was at the time of the VAE decode, before the image was finished. Task Manager showed VRAM filled and spilling over into system RAM by 1.5-3 GB
  • The behaviour is consistent, even with a 16-bit VAE
  • Afterwards VRAM is unloaded down to 5-6 GB for the next generation, which again hits very high levels and spills over into RAM at the time of the VAE decode

Tested resolutions: 1152 x 896 & 1024 x 1024, same behaviour.
This used to work well in previous versions, up to around the time support for drawing pads was introduced (5.1?).
This is a standard installation, except that invoke.yaml has been edited to add the parameter below:
enable_partial_loading: true

Not sure if it's specific to my system, but I never had this issue with Forge or ComfyUI.
Any idea what may be causing this odd memory-management behaviour and how one might fix it?
I have sysmem fallback enabled and I don't want to change it. What's strange is that there are no issues with Flux, even with the full-size T5, which consumes far more VRAM.
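For rough scale, here is a back-of-the-envelope estimate of why an fp32 VAE decode at these resolutions is heavy. This is my own arithmetic, not anything from Invoke's code, and it assumes the stock SD/SDXL VAE decoder width of roughly 128 channels at full output resolution:

```python
# Rough activation-size estimate for an fp32 SDXL VAE decode at 1152 x 896.
# Assumption (not from Invoke's code): the decoder's widest full-resolution
# feature maps have ~128 channels, as in the stock SD/SDXL VAE.
width, height = 1152, 896
channels = 128          # assumed decoder width at full resolution
bytes_per_element = 4   # fp32

one_feature_map = channels * width * height * bytes_per_element
print(f"one fp32 feature map: {one_feature_map / 2**30:.2f} GiB")  # ~0.49 GiB
```

Several such tensors are alive at once in the final up-blocks (conv inputs/outputs, residual branches, normalization buffers), so fp32 decode activations alone can plausibly add a few GiB on top of the model weights still resident in VRAM.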

What you expected to happen

I expect to be able to generate an image with an SDXL model without needing 11.5-13 GB of VRAM at the time of the VAE decode.

How to reproduce the problem

  • Use a 10 GB VRAM card
  • Standard Invoke installation
  • invoke.yaml - add: enable_partial_loading: true
  • Generate a 1152 x 896 or 1024 x 1024 image to canvas using SDXL + sdxl_vae.safetensors (or auto)
  • At the time of VAE decode, excessive VRAM is used (see the measurement sketch below)
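If it helps with triage, here is a standalone check using diffusers directly (not Invoke's code path; the VAE repo name is just an example I picked) that decodes a 1152 x 896-sized latent and reports peak VRAM:

```python
import torch
from diffusers import AutoencoderKL

# Standalone sketch (outside InvokeAI): measure peak VRAM for an SDXL VAE
# decode at 1152 x 896. The fp16-fix VAE repo is an arbitrary example.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
).to("cuda")

# A 1152 x 896 image corresponds to a 144 x 112 latent (8x downsampling).
latents = torch.randn(1, 4, 112, 144, dtype=torch.float16, device="cuda")

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample

peak_gib = torch.cuda.max_memory_allocated() / 2**30
print(f"peak VRAM during decode: {peak_gib:.2f} GiB")
```

Swapping torch.float16 for torch.float32 in the same sketch roughly doubles the activation memory, which is one way to compare the 16-bit and 32-bit VAE cases described above.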

Additional context

No response

Discord username

No response

fintarn added the bug label Jan 23, 2025
freelancer2000 commented
I am seeing an issue on the FLUX side as well. I can generate a couple of images at 2-3 s/it, but after a while the whole PC starts lagging and generation slows to 20-30 s/it unless I close Python/Invoke and start it up again.
