
Added normalization for predictions. #91

Merged
merged 8 commits on Dec 15, 2024

Conversation

karannb
Contributor

@karannb karannb commented Nov 10, 2024

Fix for issue #90. I have added only a few lines of code in predict.py so that Roost models can be used for prediction later.

Owner

The aleatoric uncertainties would also need to be denormed

Contributor Author

I think line 108 handles that case as well: if the model is robust, preds will contain both (line 113) -

preds, aleat_log_std = preds.T

Contributor Author

Hi, I understand the problem you are pointing out, have added a fix.

# denorm the mean and aleatoric uncertainties separately
mean, log_std = np.split(preds, 2, axis=1)
preds = normalizer.denorm(mean)
ale_std = np.exp(log_std) * normalizer.std
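The intent of the snippet above can be seen in a minimal, self-contained sketch. The Normalizer class and all numbers below are assumptions for illustration only, not the repo's actual implementation; they just mimic the interface the snippet uses:

```python
import numpy as np

class Normalizer:
    """Hypothetical stand-in for the repo's normalizer: stores the
    training-target mean/std and undoes the scaling on demand."""

    def __init__(self, mean: float, std: float):
        self.mean = mean
        self.std = std

    def denorm(self, x):
        # invert (x - mean) / std
        return x * self.std + self.mean

# A robust model outputs (normalized mean, log-std) per sample.
preds = np.array([[0.5, -1.0],
                  [-0.2, 0.3]])
normalizer = Normalizer(mean=10.0, std=2.0)

# denorm the mean and aleatoric uncertainties separately
mean, log_std = np.split(preds, 2, axis=1)
preds = normalizer.denorm(mean)             # preds is now [[11.0], [9.6]]
ale_std = np.exp(log_std) * normalizer.std  # std scales but is not shifted
```

The key point is that only the scale (std) applies to the uncertainty; the mean offset does not, because a standard deviation is invariant under translation of the target.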

Owner

we need to put this back to the log space here based on the logic below.
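The round trip this comment asks for is a one-liner: since log(exp(s) * std) = s + log(std), denorming a log-std is just an additive shift. A small sketch, with values assumed purely for illustration:

```python
import numpy as np

target_std = 2.0                 # assumed training-target std
log_std = np.array([-1.0, 0.3])  # model output in normalized log space

# denorm the std, then go back into log space for the logic below
ale_std = np.exp(log_std) * target_std
log_std_denormed = np.log(ale_std)
# equivalent additive form: log_std + np.log(target_std)
```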

Owner

I would think it would be less code to just add the normalizer into the logic below rather than having to make a new logic block.

@karannb
Contributor Author

karannb commented Nov 18, 2024

Hi, I agree with both of your suggestions. I made the changes.

@CompRhys
Owner

CompRhys commented Dec 4, 2024

thanks for the fix, can you confirm that this actually works as intended?

@karannb
Contributor Author

karannb commented Dec 5, 2024

I have been using this branch for my research and it gives predictions as expected. I will try one more thing as a sanity check today and post here when done.

@CompRhys CompRhys merged commit a2de5b7 into CompRhys:main Dec 15, 2024
3 checks passed
@karannb
Contributor Author

karannb commented Dec 26, 2024

Hi, sorry for the big delay! I got caught up in exams and travelling. Meanwhile, I noticed that this code doesn't handle classification predictions, so I have added code for that. I am performing the sanity check I mentioned and will raise another PR. The check to detect a normalizer was also incorrect, i.e., this one -

if "normalizer_dict" in checkpoint:
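The comment does not say how this check fails, but one plausible failure mode (an assumption on my part, not the confirmed fix) is that the key can exist in the checkpoint while its value is empty or None, e.g. for tasks trained without a normalizer, so a bare membership test passes when it should not. A hypothetical stricter check:

```python
def has_normalizer(checkpoint: dict) -> bool:
    """Hypothetical check: require the normalizer state to actually be
    populated, not merely for the key to exist."""
    norm_dict = checkpoint.get("normalizer_dict")
    return norm_dict is not None and any(v is not None for v in norm_dict.values())

# key present but holding None -> membership alone would wrongly pass
assert "normalizer_dict" in {"normalizer_dict": {"target": None}}
assert not has_normalizer({"normalizer_dict": {"target": None}})
assert has_normalizer({"normalizer_dict": {"target": {"mean": 0.0, "std": 1.0}}})
```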

Development

Successfully merging this pull request may close these issues.

Normalization during training, but missing during evaluation / prediction.
2 participants