When fitting with multiple models in which we vary the number of peaks, we currently use the chi-squared statistic to pick the best model with the fewest parameters. Better practice would be to use the Akaike Information Criterion (AIC) or another information criterion, or, if it is not too difficult to extract, the Bayesian evidence.
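For reference, under a Gaussian-likelihood approximation the AIC can be computed directly from the best-fit chi-squared as `chi2 + 2k`, with `k` the number of free parameters. A minimal sketch of how fits with different peak counts could be compared (the function name and example numbers below are hypothetical, not part of fitburst):

```python
def aic_from_chisq(chisq: float, num_params: int) -> float:
    """AIC assuming Gaussian errors, where -2 ln L reduces to the
    best-fit chi-squared up to an additive constant."""
    return chisq + 2 * num_params

# Hypothetical comparison of fits with 1, 2, and 3 burst components:
# n_peaks -> (best-fit chi-squared, number of free parameters)
fits = {1: (1250.3, 6), 2: (1180.7, 11), 3: (1178.9, 16)}

aic = {n: aic_from_chisq(chisq, k) for n, (chisq, k) in fits.items()}
best = min(aic, key=aic.get)
print(f"preferred number of peaks: {best} (AIC = {aic[best]:.1f})")
```

Unlike a raw chi-squared comparison, the `2k` term penalizes the extra parameters, so the 3-peak model above would not be preferred despite its marginally lower chi-squared.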
@DanielaBreitman I added an initial stats.py file in fitburst/analysis, which currently only contains a function to compute an F-test statistic. However, F-tests are certainly not the only option, and I definitely agree that the AIC and equivalent tests should be available and used. How about we add one or a few additional functions, such as AIC and Bayes-factor calculations, to this file?
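Something like the following could sit alongside the F-test helper in fitburst/analysis/stats.py. This is only a sketch under the assumption of a Gaussian likelihood (so that -2 ln L reduces to the best-fit chi-squared); the function names and signatures are suggestions, not existing fitburst API:

```python
import numpy as np


def compute_aic(chisq: float, num_params: int, num_points: int | None = None) -> float:
    """AIC from a best-fit chi-squared (Gaussian-likelihood approximation).

    If num_points is supplied, the small-sample correction term is added,
    yielding the corrected AIC (AICc).
    """
    aic = chisq + 2 * num_params
    if num_points is not None:
        aic += 2 * num_params * (num_params + 1) / (num_points - num_params - 1)
    return aic


def compute_bic(chisq: float, num_params: int, num_points: int) -> float:
    """BIC from a best-fit chi-squared (Gaussian-likelihood approximation)."""
    return chisq + num_params * np.log(num_points)


def compute_bayes_factor(log_evidence_1: float, log_evidence_2: float) -> float:
    """Bayes factor of model 1 over model 2, given log-evidences
    (e.g., as returned by a nested-sampling run)."""
    return np.exp(log_evidence_1 - log_evidence_2)
```

The Bayes-factor helper assumes we can get log-evidences out of the sampler; if that turns out to be awkward, AIC/BIC from the existing least-squares fits would still be a straightforward improvement over raw chi-squared comparisons.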