Add sample uncertainty to score decompose #72
Comments
@lorentzenchr do you have any reference for implementing this? This feature sounds very useful and I would be happy to contribute.

I'm thinking of a new function
@lorentzenchr thanks for clarifying.

score_per_obs_de_meaned = score_per_obs - np.mean(score_per_obs)
scipy.special.stdtr(
    len(score_per_obs) - 1,
    -np.abs(score_per_obs_de_meaned / stderr(score_per_obs)),
)

I ignored the weights for the time being. What do you think?
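A minimal, self-contained sketch of the t-test primitive used in the snippet above, not part of the issue itself. It assumes `stderr` means the standard error of the mean (here `scipy.stats.sem`), collapses the elementwise expression into a single one-sample t statistic, and uses a made-up reference value `mu_0` (for the bias case the natural reference would be 0):

```python
import numpy as np
from scipy import special, stats

# Stand-in per-observation scores; in practice these come from a scoring function.
rng = np.random.default_rng(0)
score_per_obs = rng.gamma(2.0, 1.0, size=200)

n = len(score_per_obs)
score_mean = np.mean(score_per_obs)
sem = stats.sem(score_per_obs)  # standard error of the mean, ddof=1

# Two-sided p-value for H0: E[score] == mu_0.
# stdtr(df, t) is the Student-t CDF, the same primitive as in the comment above.
mu_0 = 2.0  # hypothetical reference value, chosen only for illustration
t_stat = (score_mean - mu_0) / sem
p_value = 2 * special.stdtr(n - 1, -np.abs(t_stat))
print(f"t = {t_stat:.3f}, two-sided p = {p_value:.3f}")
```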
I would use the model predictions instead of the score per obs, pretty much a blend of:

def compute_score(
    y_obs,
    y_pred,
    feature,
    weights,
    scoring_function,
    functional,
    level,
    n_bins,
):
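One possible reading of that signature, as a hedged sketch rather than the package's actual API: bin observations by `feature`, compute the (weighted) mean per-observation score per bin, and attach a t-distribution confidence interval at `level`. The quantile binning, the unweighted standard error, the squared-error default, and the pandas return type are my own simplifying assumptions; `functional` is accepted but unused here:

```python
import numpy as np
import pandas as pd
from scipy import stats


def compute_score(
    y_obs,
    y_pred,
    feature,
    weights=None,
    scoring_function=lambda y, z: (y - z) ** 2,  # squared error as a stand-in
    functional="mean",
    level=0.95,
    n_bins=10,
):
    y_obs, y_pred, feature = map(np.asarray, (y_obs, y_pred, feature))
    if weights is None:
        weights = np.ones_like(y_obs, dtype=float)
    score = scoring_function(y_obs, y_pred)

    # Quantile-bin a numeric feature into at most n_bins bins.
    bins = pd.qcut(feature, q=n_bins, duplicates="drop")
    df = pd.DataFrame({"bin": bins, "score": score, "weight": weights})

    rows = []
    for b, g in df.groupby("bin", observed=True):
        n = len(g)
        mean = np.average(g["score"], weights=g["weight"])
        # Unweighted standard error of the mean as a simple approximation.
        se = g["score"].std(ddof=1) / np.sqrt(n) if n > 1 else np.nan
        t_crit = stats.t.ppf(0.5 + level / 2, df=n - 1) if n > 1 else np.nan
        rows.append(
            {
                "bin": b,
                "n": n,
                "score_mean": mean,
                "ci_low": mean - t_crit * se,
                "ci_high": mean + t_crit * se,
            }
        )
    return pd.DataFrame(rows)
```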
The t-test in compute_bias is testing whether the bias per observation has 0 mean, right? What would be the null hypothesis in the score case?
I guess uncertainty / confidence intervals would be enough. As you say, for bias there is a universal reference, namely zero; for scores, all pairwise comparisons are options, which is way too many.
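A small sketch of the confidence-interval alternative suggested above: instead of fixing a null hypothesis, report the mean score with a t-based interval, so any pairwise comparison can be read off afterwards. Names and numbers here are illustrative only, not part of the package:

```python
import numpy as np
from scipy import stats


def mean_score_ci(score_per_obs, level=0.95):
    """Mean per-observation score with a two-sided t confidence interval."""
    score_per_obs = np.asarray(score_per_obs, dtype=float)
    n = score_per_obs.size
    mean = score_per_obs.mean()
    sem = stats.sem(score_per_obs)  # standard error of the mean
    half_width = stats.t.ppf(0.5 + level / 2, df=n - 1) * sem
    return mean, mean - half_width, mean + half_width


# Usage with synthetic scores.
rng = np.random.default_rng(1)
m, lo, hi = mean_score_ci(rng.exponential(scale=2.0, size=500))
print(f"mean score {m:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```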