Releases: tidymodels/yardstick
yardstick 0.0.5

Other improvements

- The `autoplot()` heat map for confusion matrices now places the predicted values on the x axis and the truth values on the y axis, to be more consistent with the confusion matrix `print()` method.

- The `autoplot()` mosaic plot for confusion matrices had the x and y axis labels backwards. This has been corrected.
yardstick 0.0.4

New metrics and functionality

- `iic()` is a new numeric metric for computing the index of ideality of correlation. It can be seen as a potential alternative to the traditional correlation coefficient, and has been used in QSAR models (@jyuu, #115).

- `average_precision()` is a new probability metric that can be used as an alternative to `pr_auc()`. It has the benefit of avoiding any issues of ambiguity in the case where `recall == 0` and the current number of false positives is `0`.
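A sketch of why `average_precision()` sidesteps that ambiguity: it weights each precision value by the corresponding increase in recall, so the undefined precision at `recall == 0` never contributes to the sum. The Python sketch below uses the standard step-wise definition AP = sum over n of (R_n - R_{n-1}) * P_n; it illustrates the idea only and is not yardstick's implementation:

```python
def average_precision(truth, scores):
    """Step-wise average precision: AP = sum((R_n - R_{n-1}) * P_n).

    `truth` is a list of 0/1 labels and `scores` the predicted
    probabilities of the positive class. Sketch only, not yardstick's code.
    """
    # Walk cases in order of decreasing score, as when tracing a PR curve.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(truth)
    tp = fp = 0
    ap = 0.0
    prev_recall = 0.0
    for i in order:
        if truth[i] == 1:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        # Precision while recall is still 0 gets weight 0, so the
        # undefined "precision at recall == 0" point never matters.
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

# A perfect ranking of two events above two non-events gives AP = 1.0
print(average_precision([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
```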
Other improvements

- `metric_set()` output now includes a `metrics` attribute which contains a list of the original metric functions used to generate the metric set.

- Each metric function now has a `direction` attribute attached to it, specifying whether to minimize or maximize the metric.

- Classification metrics that can potentially have a `0` value denominator now throw an informative warning when this case occurs. These include `recall()`, `precision()`, `sens()`, and `spec()` (#98).

- The `autoplot()` method for `pr_curve()` has been improved to always set the axis limits to `c(0, 1)`.

- All valid arguments to `pROC::roc()` are now utilized, including those passed on to `pROC::auc()`.

- Documentation for class probability metrics has been improved with more informative examples (@rudeboybert, #100).
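To see where the zero denominator arises, note that recall is TP / (TP + FN), which is undefined whenever `truth` contains no event observations. A minimal Python sketch of this kind of guard (the warning text, return value, and `event` argument are illustrative assumptions, not yardstick's exact interface):

```python
import math
import warnings

def recall_binary(truth, estimate, event="yes"):
    """Recall = TP / (TP + FN), with a zero-denominator guard. Sketch only."""
    tp = sum(1 for t, e in zip(truth, estimate) if t == event and e == event)
    fn = sum(1 for t, e in zip(truth, estimate) if t == event and e != event)
    if tp + fn == 0:
        # No relevant cases exist, so recall is undefined. Warn and return
        # NaN (yardstick returns NA; NaN is the closest Python analogue).
        warnings.warn("No event observations were detected in `truth`; "
                      "recall is undefined and NaN is returned.")
        return math.nan
    return tp / (tp + fn)

# Every observation is a non-event, so TP + FN == 0.
print(recall_binary(["no", "no"], ["yes", "no"]))  # nan, with a warning
```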
Bug fixes

- `mn_log_loss()` now uses the min/max rule before computing the log of the estimated probabilities, to avoid problematic undefined log values (#103).

- `pr_curve()` now places a `1` as the first precision value, rather than `NA`. While `NA` is technically correct, as precision is undefined here, `1` is practically more correct because it generates a correct PR curve graph and, more importantly, allows `pr_auc()` to compute the correct AUC.

- `pr_curve()` could generate the wrong results in the somewhat rare case when two class probability estimates were the same but had different truth values.

- `pr_curve()` (and subsequently `pr_auc()`) now generates the correct curve when there are duplicate class probability values (reported by @dariyasydykova, #93).

- Binary `mcc()` now avoids integer overflow when the confusion matrix elements are large (#108).
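The min/max rule simply clamps each estimated probability into `[eps, 1 - eps]` before taking logs, so an estimate of exactly `0` or `1` cannot produce an infinite or undefined loss. A Python sketch of the idea (the `eps` value here is an assumption, not yardstick's exact constant):

```python
import math

def mn_log_loss_binary(truth, prob, eps=1e-15):
    """Mean binary log loss with min/max clipping. Sketch; `eps` is assumed."""
    total = 0.0
    for t, p in zip(truth, prob):
        # Min/max rule: clamp p away from 0 and 1 before taking logs,
        # so log(0) can never occur.
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(truth)

# Without clipping, the hard-wrong estimate p = 0 for a true event would
# make the loss infinite; with it, the loss is large but finite.
print(mn_log_loss_binary([1, 0], [0.0, 0.1]))
```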
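The effect of pinning the first precision value to `1` can be seen by tracing a tiny curve by hand: the point anchors the curve at the top-left, which is what lets trapezoidal integration recover the correct area. A Python sketch (it ignores the threshold and tie handling the real `pr_curve()` performs, and the function name is illustrative):

```python
def pr_curve_points(truth, scores):
    """(recall, precision) points of a PR curve, prepending (0, 1).

    Sketch only: the real pr_curve() also processes tied probability
    values together and returns the associated thresholds.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(truth)
    tp = fp = 0
    # Precision at recall 0 is formally 0/0, but using 1 instead of NA
    # anchors the curve so the area under it integrates correctly.
    points = [(0.0, 1.0)]
    for i in order:
        if truth[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((tp / total_pos, tp / (tp + fp)))
    return points

print(pr_curve_points([1, 0], [0.9, 0.4]))
# [(0.0, 1.0), (1.0, 1.0), (1.0, 0.5)]
```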
v0.0.3

- A few new metrics from tidyverse developer day.
- Fixed a few yardstick-related bugs.
- Updated tests to comply with the R 3.6 `sample()` fiasco.
v0.0.2
- Breaking changes to stabilize the API
- Multiclass support
- Curve metrics
- More classification and regression metrics
- Altered documentation to be 1 help file per metric