
confused about the file called final_model_release #350

Closed
znygithub opened this issue Jan 11, 2025 · 2 comments

@znygithub

Hello, I would like to know whether the dataset named before each model weight indicates the dataset the model was trained on. If so, what training/validation split ratio was used? For example, was training done on 0.0-0.6 of PURE and testing on 0.6-0.8 of PURE?

[screenshot of the pre-trained model filenames]

@yahskapar yahskapar self-assigned this Jan 11, 2025
@yahskapar (Collaborator)

Hi @znygithub,

This entire repo focuses on cross-dataset results, so each pre-trained model is named after the training dataset used, followed by the neural method. In the case of BigSmall, the fold that was held out for testing is also included in the name (correct me if I'm wrong about the meaning of BigSmall's fold number, @girishvn). You can then apply the pre-trained models to any other test dataset.

Typically, the splits are 0.0-0.8 for training and 0.8-1.0 for validation, as indicated in the default config files in the repo, such as the one that corresponds to PURE_TSCAN.pth. You can, of course, modify the config files to train a model within-dataset (intra-dataset) using splits such as 0.0-0.6 for training, 0.6-0.8 for validation, and 0.8-1.0 for testing.
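The ratio-based splitting described above can be sketched in a few lines of Python. This is an illustrative helper, not the toolbox's actual API: `split_by_ratio` and the file names are hypothetical, but the BEGIN/END-ratio semantics mirror what the config files express.

```python
def split_by_ratio(file_list, begin, end):
    """Return the slice of an ordered file list between two ratios.

    For example, begin=0.0, end=0.8 selects the first 80% of the
    files for training, matching the default config splits.
    """
    n = len(file_list)
    return file_list[int(begin * n):int(end * n)]

# Hypothetical ordered list of 10 PURE recordings.
files = [f"subject_{i:02d}" for i in range(10)]

# Default cross-dataset setup: 80% train, 20% validation.
train = split_by_ratio(files, 0.0, 0.8)
valid = split_by_ratio(files, 0.8, 1.0)

# Within-dataset alternative: 60/20/20 train/valid/test.
train2 = split_by_ratio(files, 0.0, 0.6)
valid2 = split_by_ratio(files, 0.6, 0.8)
test2  = split_by_ratio(files, 0.8, 1.0)
```

Note that the splits partition the file list without overlap, so a recording used for training never appears in validation or test.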

@yahskapar (Collaborator)

Closing due to the lack of follow-up discussion.
