How is context merge supposed to be used properly as intended by the author? #85
-
Note, you may have a "bug" in your workflow: the bottom conditioning has the positive connected to both Context 3's Positive and Negative inputs, which is why that second image looks a little funky. I'm going to assume you meant to connect that bottom ClipTextEncode to Context 3's negative input.

The Context Merge node merges non-null values from multiple contexts, favoring the later contexts. In your workflow, you're merging three contexts, which ultimately results in a single combined context. Here's an addition to your workflow that shows a new Context Merge with the same values as your Context Merge node (note, I did connect that bottom ClipTextEncode to Context 3's negative input, instead of the top ClipTextEncode going to both the positive and negative).
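For anyone skimming this later, here's a minimal sketch in plain Python (not the node's actual implementation, and the field names are only placeholders) of what "merges non-null values from multiple contexts, favoring the later contexts" means:

```python
# Minimal sketch of "merge non-null values, later contexts win".
# Plain Python, not the real node code; field names are placeholders.

def merge_contexts(*contexts):
    """Merge dict-like contexts; later non-None values override earlier ones."""
    merged = {}
    for ctx in contexts:               # iterate earliest -> latest
        for key, value in ctx.items():
            if value is not None:      # a null input never overwrites a value
                merged[key] = value
    return merged

ctx1 = {"positive": "pos A", "negative": "neg A", "latent": "latent A"}
ctx2 = {"positive": None,    "negative": "neg B", "latent": None}
ctx3 = {"positive": "pos C", "negative": None,    "latent": None}

print(merge_contexts(ctx1, ctx2, ctx3))
# -> {'positive': 'pos C', 'negative': 'neg B', 'latent': 'latent A'}
```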
-
Understood, and thank you so much for replying on NYE. I now 'get' why there's a merge node - the bug was because I rushed to create a little example of what I meant :) Very useful nodes! Happy New Year!
-
Happy New Year! 🥇
Question about merge.json
If I use two big context nodes, I notice I can use them to override each other's settings - very handy.
But when I do the same with a Context Merge node in the middle, I noticed it 'favors' the topmost context fed into it, in decreasing order.
I think that's the purpose.
In the example attached, the conditioning is applied to a latent that should never make it to the 2nd KSampler, yet it does.
Does this mean the Context Merge node somehow 'feeds' the information backwards to the 2nd KSampler? I understand Comfy reads the graph 'backwards', but still, this shouldn't work, surely, unless the data flows 'back' to the merge?
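(For readers puzzling over the 'reads backwards' part: below is a toy sketch, with hypothetical node names and not ComfyUI's actual scheduler, of pull-based evaluation. The graph is traversed backwards from the output to decide what needs to run, but each node still only receives data over the links wired into its inputs, so nothing flows 'back' out of a merge into an upstream KSampler.)

```python
# Toy pull-based evaluator (hypothetical structure, not ComfyUI's real code).
# Nodes are resolved recursively starting from the requested output ("backwards"),
# but each node only ever sees the results of its own upstream inputs.

def evaluate(node, graph, cache):
    """Resolve a node's inputs first (recursing upstream), then execute it."""
    if node in cache:
        return cache[node]
    inputs = {name: evaluate(src, graph, cache)   # walk "backwards" to sources
              for name, src in graph[node]["inputs"].items()}
    cache[node] = graph[node]["run"](**inputs)    # data still flows forward
    return cache[node]

# Hypothetical three-node chain: a context feeding a merge feeding a sampler.
graph = {
    "context":  {"inputs": {},               "run": lambda: {"latent": "L1"}},
    "merge":    {"inputs": {"a": "context"},  "run": lambda a: dict(a)},
    "ksampler": {"inputs": {"ctx": "merge"},  "run": lambda ctx: f"sampled {ctx['latent']}"},
}

print(evaluate("ksampler", graph, {}))  # -> sampled L1
```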