Add F1 score, precision, and recall metrics as MultilabelSegmentation default metrics #1336
base: develop
Conversation
Codecov Report
Patch coverage: additional details and impacted files below.

```diff
@@           Coverage Diff            @@
##           develop    #1336   +/-   ##
===========================================
+ Coverage    32.98%   33.13%   +0.15%
===========================================
  Files           64       65       +1
  Lines         4072     4134      +62
===========================================
+ Hits          1343     1370      +27
- Misses        2729     2764      +35
```

☔ View full report in Codecov by Sentry.
Before merging I think I need to reshape what's passed to the metric in the validation step, so that it's compatible with more metrics (currently, I think, flat tensors are passed).
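For context, a sketch of what that reshaping buys, assuming torchmetrics-style multilabel metrics (the names and sizes here are illustrative, not the task's actual code):

```python
import torch
from torchmetrics.classification import MultilabelF1Score

num_frames, num_classes = 100, 4                           # illustrative sizes
y_pred = torch.rand(num_frames, num_classes)               # per-class scores
y_true = torch.randint(0, 2, (num_frames, num_classes))    # binary targets

metric = MultilabelF1Score(num_labels=num_classes)

# The 2-D (frame, class) layout is what multilabel metrics expect:
print(metric(y_pred, y_true))

# A flattened tensor loses that structure, so the same call fails
# torchmetrics' shape validation:
# metric(y_pred.flatten(), y_true.flatten())
```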
```diff
@@ -251,10 +254,25 @@ def validation_step(self, batch, batch_idx: int):

     # mask (frame, class) index for which label is missing
     mask: torch.Tensor = y_true != -1
-    y_pred = y_pred[mask]
     y_true = y_true[mask]
+    y_pred = y_pred[mask].reshape(shape)
```
This will break as soon as `mask` contains at least one `False` entry (i.e. `y_true` contains at least one -1), because the overall size of `y_pred[mask]` will then be smaller than `shape`.
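A minimal reproduction of that failure mode (shapes are illustrative):

```python
import torch

shape = (3, 2)                                   # (frames, classes)
y_true = torch.tensor([[1, 0], [0, -1], [1, 1]])
y_pred = torch.rand(shape)

mask = y_true != -1                              # one False entry
y_pred = y_pred[mask]                            # only 5 elements remain
y_pred.reshape(shape)                            # RuntimeError: shape '[3, 2]'
                                                 # is invalid for input of size 5
```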
Writing down some things before I forget them (we can discuss this tomorrow!):
- Still support "global" metrics, BUT they have to be of a binary type (sketch below).
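One way to read the "global but binary" idea (my sketch, not the PR's code): flatten the (frame, class) grid and score every cell as a single binary decision, so any binary torchmetrics metric can be applied globally.

```python
import torch
from torchmetrics.classification import BinaryF1Score

y_pred = torch.rand(100, 4)                 # (frames, classes) scores
y_true = torch.randint(0, 2, (100, 4))

# Flattening turns the multilabel grid into one long binary problem,
# which is what makes a single "global" score possible here.
global_f1 = BinaryF1Score()
print(global_f1(y_pred.flatten(), y_true.flatten()))
```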
This reverts commit a890ef6.
(uses `ignore_index` for that, which ignores all targets with that index during metric computation; in pyannote's case, -1)
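For example (a small sketch with made-up shapes), torchmetrics' `ignore_index` drops every (frame, class) cell whose target equals that value:

```python
import torch
from torchmetrics.classification import MultilabelF1Score

# ignore_index=-1 skips every cell whose target is -1, which is how
# pyannote marks missing labels.
metric = MultilabelF1Score(num_labels=4, ignore_index=-1)
y_true = torch.tensor([[1, 0, -1, 1],
                       [0, -1, 1, 0]])
y_pred = torch.rand(2, 4)
print(metric(y_pred, y_true))
```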
…annote-audio into multilabel_default_metrics
Is this ready for review?
As discussed just now, it would be nice to try
BREAKING(model): get rid of (flaky) `Model.introspection`
…` pipeline Co-authored-by: Hervé BREDIN <hbredin@users.noreply.github.com>
Going over your PRs :) Is this mergeable?
I should test it first, but I'm done with the implementation (I don't know if it's OK for you, though :) ). In the end I didn't find a way to do away with "metric_classwise"; there's a torchmetrics classwise wrapper, but it doesn't do what we want.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Currently the MultilabelSegmentation task has no default metric; this PR adds these three torchmetrics metrics (F1 score, precision, and recall) as the default.
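A hedged sketch of what such a default could look like (the function name and wiring into the Task are assumptions; only the three torchmetrics classes and the -1 missing-label convention come from the discussion above):

```python
from torchmetrics import MetricCollection
from torchmetrics.classification import (
    MultilabelF1Score,
    MultilabelPrecision,
    MultilabelRecall,
)

def default_metric(num_labels: int) -> MetricCollection:
    # ignore_index=-1 matches pyannote's "missing label" convention.
    common = dict(num_labels=num_labels, ignore_index=-1)
    return MetricCollection(
        {
            "F1Score": MultilabelF1Score(**common),
            "Precision": MultilabelPrecision(**common),
            "Recall": MultilabelRecall(**common),
        }
    )
```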