Hello, I would like to know whether the dataset name preceding each model weight indicates that the model was trained on that dataset. If so, what split ratios were used for training and validation? For example, was training done on 0.0-0.6 of PURE, and testing on 0.6-0.8 of PURE?
This entire repo focuses on cross-dataset results, so each pre-trained model is named after the dataset it was trained on, followed by the neural method. In the case of BigSmall, the fold that was held out for testing is also included in the name (correct me if I'm wrong about the meaning of BigSmall's fold number, @girishvn). You can then apply any of the pre-trained models to any other test dataset.
Typically, the splits are 0.0-0.8 for training and 0.8-1.0 for validation, as indicated in the default config files in the repo, such as the one that corresponds to PURE_TSCAN.pth. You can, of course, modify the config files to train a model within a single dataset (intra-dataset training) using splits such as 0.0-0.6 for training, 0.6-0.8 for validation, and 0.8-1.0 for testing.
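To illustrate, a config for intra-dataset training on PURE might set the split boundaries like this. This is a sketch based on the toolbox's config conventions; double-check the key names and required fields (e.g., `DATA_PATH`, preprocessing settings) against an actual default config file in the repo:

```yaml
# Hypothetical excerpt: BEGIN/END select a fraction of the dataset's subjects.
TRAIN:
  DATA:
    DATASET: PURE
    BEGIN: 0.0   # first 60% of subjects for training
    END: 0.6
VALID:
  DATA:
    DATASET: PURE
    BEGIN: 0.6   # next 20% for validation
    END: 0.8
TEST:
  DATA:
    DATASET: PURE
    BEGIN: 0.8   # final 20% held out for testing
    END: 1.0
```

For the cross-dataset setup the pre-trained weights use, TRAIN would instead span 0.0-0.8, VALID 0.8-1.0, and TEST would point at a different dataset entirely.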