Challenge problem integration for State Set --> submissions --> run generation #8
Put the UID of the generated WF file in the stateset header. Save a copy of this file to the Google Drive (publicly viewable) and leave it there for later use. The closed loop will take the template file, generate the run from the experimental template file, and fill in the volume entries using the lookup of the link. Alternatively, we could remove the link entirely and just regenerate the stateset from the template.
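A minimal sketch of how the UID could be written into the stateset header and recovered later by the closed loop; the comment-style header line, the `uuid4` identifier, and the function names are illustrative assumptions, not the project's actual format:

```python
import uuid
import pandas as pd

def write_stateset_with_uid(stateset: pd.DataFrame, out_path: str) -> str:
    """Write the generated stateset with a UID header line (hypothetical layout)."""
    run_uid = str(uuid.uuid4())
    with open(out_path, "w") as f:
        f.write(f"# WF_UID: {run_uid}\n")   # header line tying the stateset to the WF file
        stateset.to_csv(f, index=False)
    return run_uid

def read_stateset_uid(path: str) -> str:
    """Recover the UID so the closed loop can look up the matching WF / template file."""
    with open(path) as f:
        return f.readline().split("WF_UID:")[-1].strip()
```

Keeping the UID as the first header line would make the lookup cheap: the closed loop only needs to read one line to find the matching workflow file.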
The challenge problem requires that an initial state set covering the maximum accessible physical space be generated, that the relevant physical features and chemical descriptors be pulled in, and that a training set and state set be generated for distribution for each weekly amine. A new feature is also needed to take in the requested runs and add them to the robot file.
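As a rough illustration of that pipeline, the sketch below enumerates a state set for one weekly amine, merges in its descriptors, and appends requested runs to the robot file; the column names, grid sampling, and the `descriptors` table keyed on the amine are assumptions, not the project's actual data model:

```python
import itertools
import pandas as pd

def build_stateset(amine: str, reagent_levels, descriptors: pd.DataFrame) -> pd.DataFrame:
    """Enumerate the accessible space for one weekly amine and attach its descriptors."""
    # Cartesian product over the reagent concentration levels (stand-in for real sampling)
    grid = pd.DataFrame(list(itertools.product(*reagent_levels)),
                        columns=[f"reagent_{i}_M" for i in range(len(reagent_levels))])
    grid["amine"] = amine
    # Pull in physical features / chemical descriptors keyed on the amine
    return grid.merge(descriptors, on="amine", how="left")

def append_requests_to_robot_file(robot_runs: pd.DataFrame, requests: pd.DataFrame) -> pd.DataFrame:
    """Add the requested runs to the generated runs in the robot file."""
    return pd.concat([robot_runs, requests], ignore_index=True)
```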
Thoughts:
Is it worth merging the entire data workup code with the front end so that training set generation / state set generation can be done in line? There are many redundant functions. A separate command could run the other portion of the code through the same execution script, or a modified version of it (i.e., skip all of the data entry and go through an entirely different pathway in the code). Or should the code just pull in the relevant functions while the code bases remain separate? That would require updating both whenever the data workup side of the code changes. Alternatively, the data workup could live as a subdirectory of this code and be executed as part of the challenge problem (see the sketch below).
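If the data workup ends up as a subdirectory invoked from the same execution script, the switch could look roughly like this; the flag name and both placeholder functions are hypothetical, not existing code:

```python
import argparse

def run_data_entry():
    """Placeholder for the existing front-end data entry pathway."""
    print("running data entry")

def run_challenge_problem():
    """Placeholder for the data workup pathway (training set / state set generation)."""
    print("generating training set and state set")

def main():
    # Hypothetical switch on the existing execution script: the default path keeps the
    # current data-entry behaviour, while --challenge-problem bypasses it and runs the
    # data workup pathway instead (imagined here as a subdirectory of this repo).
    parser = argparse.ArgumentParser()
    parser.add_argument("--challenge-problem", action="store_true",
                        help="skip data entry and generate the training set / state set")
    args = parser.parse_args()
    (run_challenge_problem if args.challenge_problem else run_data_entry)()

if __name__ == "__main__":
    main()
```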
Overview of the planned code changes for CP runs:
Phase 1:
Phase 2: