Review logic for setting workflow protocols as 'saved' #487
Thanks Jose Miguel. Probably we should:

1. Since a single "continue" of a protocol could already break reproducibility/traceability if a certain parameter is modified, we may soon allow this when continuing a workflow, assuming the current behaviour.
2. Detect which parameters have changed before the "execute", and check whether any of them is "one of those affecting the output" (the protocol may need to tell pyworkflow which ones do and which ones don't). By default, I'd say all protocol-specific parameters affect the output; then we should provide a way to define those that do not affect continuation.
For point #2, would it make sense to say that changing any compute parameters (the top part of any protocol form) does not affect the output?
Yes, a first improvement could be what Grigory proposed.

We need to check and think about some cases, but I guess Pablo's assumption is correct: most parameters will affect the output. Other parameters are more about computing (e.g. batchSize for streaming).
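The classification discussed above could be sketched roughly as follows. This is a hypothetical illustration, not the pyworkflow API: the set of "output-neutral" parameter names and the helper `changed_output_params` are invented here to show the idea of flagging only changes that could break reproducibility.

```python
# Hypothetical sketch (not actual pyworkflow code): a protocol declares which
# of its parameters do NOT affect the output, so changing them is safe when
# continuing. Everything else is treated as output-affecting by default.
OUTPUT_NEUTRAL = {"batchSize", "numberOfThreads", "gpuList", "numberOfMpi"}

def changed_output_params(saved, current, output_neutral=OUTPUT_NEUTRAL):
    """Return the changed parameters that could affect the protocol's output.

    `saved` and `current` are dicts of parameter name -> value, e.g. the
    persisted form values versus the edited form values before "execute".
    """
    # Parameters whose value differs, plus any newly introduced parameters.
    changed = {k for k in saved if saved[k] != current.get(k)}
    changed |= {k for k in current if k not in saved}
    # Compute-only parameters are allowed to change without a warning.
    return changed - output_neutral
```

With this, changing only `batchSize` or the GPU list would yield an empty set (continue silently), while changing anything else could trigger the warning-and-let-the-user-decide flow mentioned below.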
100% agree
On Fri, Dec 15, 2023, 16:33 Jose Miguel de la Rosa Trevin <***@***.***> wrote:
> Yes, a first improvement could be what Grigory proposed.
>
> 1. Any change in computing parameters should not affect the output and should be allowed.
> 2. We could make it a bit more flexible and allow execution with a warning, and let the user decide.
>
> We need to check and think about some cases, but I guess Pablo's assumption is correct: most parameters will affect the output. Other parameters are more about computing (e.g. batchSize for streaming).
I will explain my example, where some processing time was lost due to this issue.

Setup of a streaming workflow overnight: import movies -> motioncor (3 GPUs) -> cryolo picking (1 GPU) -> relion particle extraction.

The next day motioncor had finished, but cryolo had been the bottleneck and there were still many micrographs to pick (and therefore extract). I stopped cryolo and changed the threads/GPUs to add more GPUs and finish the task quicker. Then, once it was stopped, I selected 'Single' and, after Continue, the subsequent 'relion-extraction' job was changed from 'aborted' to 'saved'.

So cryolo will continue and skip already picked micrographs, but the extraction job is now starting from scratch.
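The scenario above suggests the fix the thread converges on: only reset a downstream job to 'saved' when an output-affecting parameter actually changed. A minimal sketch, assuming a per-protocol set of output-neutral parameter names (the function name and statuses are illustrative, not pyworkflow's real ones):

```python
# Hypothetical sketch: decide the status of a downstream job after its
# upstream protocol is edited and continued. Only changes to parameters
# outside `output_neutral` (e.g. GPUs/threads) invalidate previous results.
def next_status(param_changes, output_neutral, previous_status):
    """Return the downstream job's status after an upstream "continue".

    `param_changes` is the set of upstream parameter names that were edited;
    `previous_status` is the downstream job's status before the edit.
    """
    affects_output = any(p not in output_neutral for p in param_changes)
    # Reset to 'saved' (start from scratch) only when the output may differ;
    # otherwise keep the previous status so the job can simply resume.
    return "saved" if affects_output else previous_status
```

Under this rule, editing only the GPU list (as in the cryolo example above) would leave relion-extraction resumable instead of discarding the particles already extracted.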