This is a GPU implementation of ROC-AUC pairwise objectives for gradient boosting, along with a multithreaded CPU implementation of the same losses:
- Sigmoid pairwise loss (GPU or CPU implementations; see the brute-force sketch after this list):
$$L = \sum_{i, j} (\hat P_{ij}\log{P_{ij}} + (1 - \hat P_{ij})\log{(1 - P_{ij})})$$
- RocAuc Pairwise Loss with approximate AUC computation (GPU or CPU implementations):
$$L = \sum_{i, j} (\hat P_{ij}\log{P_{ij}} + (1 - \hat P_{ij})\log{(1 - P_{ij})})|\Delta_{AUC^{approx}_{ij}}|$$
- RocAuc Pairwise Loss Exact, with exact AUC computation (GPU or CPU implementations). It can be more compute-intensive, but may be helpful in the first boosting rounds:
$$L = \sum_{i, j} (\hat P_{ij}\log{P_{ij}} + (1 - \hat P_{ij})\log{(1 - P_{ij})})|\Delta_{AUC^{exact}_{ij}}|$$
- RocAuc Pairwise Loss Exact Smoothened (GPU or CPU implementations). This loss lets you incorporate information about instances with equal labels: since $\Delta_{AUC_{ij}} = 0$ when $y_i = y_j$, a small $\epsilon > 0$ is added to the weight:
$$L = \sum_{i, j} (\hat P_{ij}\log{P_{ij}} + (1 - \hat P_{ij})\log{(1 - P_{ij})})(\epsilon + |\Delta_{AUC^{exact}_{ij}}|)$$
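For illustration, here is a minimal brute-force NumPy sketch of the losses above. It is not the package's API: the function names (`sigmoid_pairwise_loss`, `delta_auc_exact`, `roc_auc_pairwise_loss_exact`) are hypothetical, the signs follow the formulas as written, and $\hat P_{ij}$ is taken as 1, 0.5, or 0 for $y_i > y_j$, $y_i = y_j$, $y_i < y_j$ (the usual RankNet convention).

```python
import numpy as np
from itertools import combinations


def _p_hat(y_i, y_j):
    # Target probability \hat P_{ij}: 1 if y_i > y_j, 0.5 if y_i == y_j, 0 otherwise
    # (an assumption here; the package may define ties differently).
    return 0.5 * (1.0 + np.sign(y_i - y_j))


def _p_ij(s_i, s_j):
    # Model probability P_{ij} = sigmoid(s_i - s_j), clipped for numerical safety.
    return np.clip(1.0 / (1.0 + np.exp(-(s_i - s_j))), 1e-12, 1.0 - 1e-12)


def sigmoid_pairwise_loss(y_true, y_pred):
    # Plain sigmoid pairwise loss summed over all pairs (i, j).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred, dtype=float)
    total = 0.0
    for i, j in combinations(range(len(y_true)), 2):
        p_hat, p = _p_hat(y_true[i], y_true[j]), _p_ij(y_pred[i], y_pred[j])
        total += p_hat * np.log(p) + (1.0 - p_hat) * np.log(1.0 - p)
    return total


def exact_auc(y_true, y_pred):
    # Exact ROC AUC via pairwise comparisons; assumes binary labels 0/1
    # with both classes present (tied scores count as 0.5).
    pos, neg = y_pred[y_true == 1], y_pred[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))


def delta_auc_exact(y_true, y_pred, i, j):
    # |Delta AUC^{exact}_{ij}|: AUC change caused by swapping predictions i and j.
    swapped = y_pred.copy()
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return abs(exact_auc(y_true, swapped) - exact_auc(y_true, y_pred))


def roc_auc_pairwise_loss_exact(y_true, y_pred, eps=0.0):
    # Delta-AUC-weighted sigmoid pairwise loss; eps > 0 gives the smoothened variant.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred, dtype=float)
    total = 0.0
    for i, j in combinations(range(len(y_true)), 2):
        p_hat, p = _p_hat(y_true[i], y_true[j]), _p_ij(y_pred[i], y_pred[j])
        pair_ll = p_hat * np.log(p) + (1.0 - p_hat) * np.log(1.0 - p)
        total += pair_ll * (eps + delta_auc_exact(y_true, y_pred, i, j))
    return total
```

The $O(n^2)$ pair enumeration, combined with recomputing the AUC for every candidate swap, is what makes the exact variant expensive; the package presumably replaces this brute force with incremental $\Delta_{AUC}$ updates and CPU/GPU parallelism.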
Benchmark hardware: Intel Core i5-10600KF (12 threads), Nvidia RTX 3060.
- See the GitHub repository.
- Or use the example notebook on Google Colab.
- Or use the example notebook on Kaggle.
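As a rough illustration of how pairwise gradients of this kind typically plug into a gradient boosting library, below is a hedged sketch using XGBoost's custom-objective interface. This is not the package's API: `sigmoid_pairwise_objective` is a hypothetical name, the gradient and hessian correspond to the negated (minimized) form of the plain sigmoid pairwise loss above, and the double loop over pairs is kept for clarity rather than speed.

```python
import numpy as np
import xgboost as xgb


def sigmoid_pairwise_objective(predt, dtrain):
    # Custom XGBoost objective: per-instance gradient and hessian of the
    # negated sigmoid pairwise loss (so that XGBoost minimizes it).
    y = dtrain.get_label()
    n = len(y)
    grad = np.zeros(n)
    hess = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            p_hat = 0.5 * (1.0 + np.sign(y[i] - y[j]))         # \hat P_{ij}
            p = 1.0 / (1.0 + np.exp(-(predt[i] - predt[j])))   # P_{ij}
            lam = p - p_hat   # d/ds_i of the negated pairwise log-likelihood
            grad[i] += lam
            grad[j] -= lam
            hess[i] += p * (1.0 - p)
            hess[j] += p * (1.0 - p)
    return grad, hess


# Usage sketch (X, y are assumed to exist):
# dtrain = xgb.DMatrix(X, label=y)
# booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=100,
#                     obj=sigmoid_pairwise_objective)
```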