menpobench currently goes as far as building and testing methods and obtaining the fitting results, and then... does nothing with them.
We need to persist the results from a given method. I think a .pkl of all the fitting results should be stored in the menpobench cache, so that at a later date we can recompute different graphs (e.g. to handle different normalisation schemes).
The current design in mind is as follows:
For a given unique combination of: `(training_dataset, testing_dataset, method)`
or, in the case of `untrainable_methods`: `(testing_dataset, untrainable_method)`
if all components are pre-defined, cache the fitting results to `cache/results/<hash>.pkl`. On future invocations, check this cache first and return the stored results if available. We will also ship a predefined cache with menpobench, updated from the continuous integration server, so that for the most common tests menpobench can complete immediately.
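A minimal sketch of this caching scheme, with function and path names that are illustrative only (not menpobench's actual internals):

```python
import hashlib
import pickle
from pathlib import Path


def result_cache_path(cache_dir, *components):
    # Hash the unique combination of predefined component names
    # (e.g. training_dataset, testing_dataset, method) to a filename.
    key = hashlib.sha1('|'.join(components).encode('utf-8')).hexdigest()
    return Path(cache_dir) / 'results' / (key + '.pkl')


def run_with_cache(cache_dir, components, run_experiment):
    # Only consult the cache when every component is predefined;
    # user-defined components would bypass this function entirely.
    path = result_cache_path(cache_dir, *components)
    if path.exists():
        with path.open('rb') as f:
            return pickle.load(f)
    results = run_experiment()
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open('wb') as f:
        pickle.dump(results, f)
    return results
```

The second invocation with the same component tuple loads the pickled results instead of re-running the experiment.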
If any component is user defined, we don't use the cache at all. There is a reasonable expectation that these non-predefined files may be modified by the user, and we don't want to accidentally serve a cached result. We could definitely make this more advanced in the future, but for now, KISS.
We then write a generic output generation function that takes a set of fitting results and draws up the curves in various forms (e.g. different normalisation schemes). The user would have to provide an output dir, which we populate with:
- the .pkl of the experiment's fitting results (regardless of whether the test was predefined or not)
- graphs for the commonly used normalisations
- CSVs for the above graphs
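As a sketch of what the generic output function could compute, here is a cumulative error distribution curve (the usual form these graphs take) dumped to CSV. The function names and the exact curve parameters are assumptions, not menpobench's real API:

```python
import csv


def ced_curve(errors, max_error=0.05, n_bins=50):
    # Cumulative error distribution: for each error threshold, the
    # proportion of fitting results with (normalised) error below it.
    thresholds = [max_error * i / n_bins for i in range(n_bins + 1)]
    n = len(errors)
    return [(t, sum(e <= t for e in errors) / n) for t in thresholds]


def write_ced_csv(errors, csv_path, **kwargs):
    # One CSV per graph: threshold in the first column, proportion
    # of images fit below that threshold in the second.
    with open(csv_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['error_threshold', 'proportion_of_images'])
        writer.writerows(ced_curve(errors, **kwargs))
```

Different normalisation schemes would simply pre-divide `errors` by different quantities (inter-ocular distance, face bounding-box size, etc.) before calling this.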
A flag could be added to output a .mat file if the user wishes to easily import the fitting result data into MATLAB. With this, we would be heading towards something like:
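As a purely hypothetical illustration (all filenames invented), the populated output dir might look something like:

```text
output/
├── results.pkl            # raw fitting results for the experiment
├── ced_bbox_norm.png      # one graph per common normalisation
├── ced_interocular.png
├── ced_bbox_norm.csv      # one CSV per graph
├── ced_interocular.csv
└── results.mat            # only if the mat-file flag is passed
```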
The only issue is that at the moment our fitting results are ludicrously large to put down on disk. So we either need to revisit the results to make them leaner, or just save the final PTS files or something. Probably we should make fitting results more useful by ensuring they are a lot leaner!
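If we go the lean route, persisting just the final shapes is cheap. A sketch of a PTS serialiser, assuming the plain-text landmark PTS format used by 300W-style annotations:

```python
def points_to_pts(points):
    # Serialise a list of (x, y) landmark coordinates to the
    # plain-text PTS landmark format.
    lines = ['version: 1', 'n_points: {}'.format(len(points)), '{']
    lines.extend('{} {}'.format(x, y) for x, y in points)
    lines.append('}')
    return '\n'.join(lines) + '\n'
```

Even a 68-point shape serialised this way is a couple of kilobytes, versus the megabytes a full fitting result can occupy.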
We now have a basic skeleton for this working, with caching. I'm not saying it's perfect, but we can make smaller issues now to address specific shortcomings (like #10, #12), so I'm closing this.
With all this the API would look like:
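A minimal stub of what that top-level entry point could look like; every name here is an illustrative guess, not menpobench's actual interface:

```python
def invoke_benchmark(experiment_yaml, output_dir, matlab=False):
    # Hypothetical entry point: run (or fetch from cache) the experiment
    # described by the YAML file, then populate output_dir with the
    # results .pkl, graphs and CSVs - plus a .mat file if requested.
    outputs = ['results.pkl', 'graphs', 'csvs']
    if matlab:
        outputs.append('results.mat')
    return outputs  # stand-in for the real side effects
```

Usage would then be a single call, e.g. `invoke_benchmark('my_experiment.yaml', './output', matlab=True)`.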