Your question
This is a general question whose answer I failed to find in the code or on Google, so I am asking here for help understanding the following:
The Flux model can be fine-tuned with its transformer (unet) weights in fp8, data in fp32, and autocast to bf16. While I verified that this indeed works, I cannot reproduce it with simple code.
For example:
The above code fails with:

RuntimeError: Promotion for Float8 Types is not supported, attempted to promote Float8_e4m3fn and Float

But somehow this just works when fine-tuning Flux. Where is the magic? Many thanks!

Logs
No response
Other
Using torch 2.5.1, CUDA 12.4, on a 4090.