fix confusion matrix code #76

Draft · wants to merge 1 commit into base: main

Conversation

@matteopilotto

Fixes the issues mentioned in #75.

@lewtun @lvwerra when you're free, please take a look at it.

@lvwerra (Member) · Jan 30, 2023

I think the order was OK before, no? I know it's a bit inconsistent that plot_confusion_matrix and confusion_matrix use a reversed order of arguments, but I think it's functionally correct, no?


@matteopilotto (Author)

Thanks for double-checking, Leandro. I triple-checked, and I still believe the code in your notebook and the resulting confusion matrix are not correct.

In your notebook you define plot_confusion_matrix as

def plot_confusion_matrix(y_preds, y_true, labels):

where the inputs are ordered predictions first, then ground-truth labels.

However, when you call the function you pass the arguments in the reverse order, first the ground-truth labels and then the predicted labels:

plot_confusion_matrix(
  df_tokens["labels"],
  df_tokens["predicted_label"],
  tags.names
)

Hopefully the code makes what I'm trying to highlight clearer.
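
For reference, here is a minimal sketch of one possible fix that keeps the (y_preds, y_true, labels) signature and only swaps the arguments at the call site:

plot_confusion_matrix(
  df_tokens["predicted_label"],  # y_preds: model predictions first
  df_tokens["labels"],           # y_true: ground-truth labels second
  tags.names
)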

In any case, this is only one of the issues that, in my opinion, make the confusion matrix inaccurate. The second is the mismatch between the values in the confusion matrix and the displayed labels.

If you run this simple code

# Fraction of true I-LOC tokens that the model also predicts as I-LOC
df_ILOC = df_tokens[df_tokens['labels'] == 'I-LOC']
(df_ILOC['labels'] == df_ILOC['predicted_label']).sum() / len(df_ILOC)

to check the accuracy of the I-LOC label, you will immediately notice that the model does not predict this label correctly 99% of the time, as the confusion matrix produced by your notebook (and shown in the book) suggests. In fact, the model's accuracy for this specific label (i.e. I-LOC) is around 85%, and the label that is actually predicted correctly 99% of the time is O.
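
To see the full picture, here is a small sketch that prints the same per-label accuracy for every class, assuming as above that df_tokens has 'labels' and 'predicted_label' columns and that tags.names holds the label names:

# Per-label accuracy: fraction of tokens with a given true label
# that the model also predicts with that label
for tag in tags.names:
    df_tag = df_tokens[df_tokens['labels'] == tag]
    acc = (df_tag['labels'] == df_tag['predicted_label']).sum() / len(df_tag)
    print(f"{tag}: {acc:.2%}")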

The problem arises because sklearn's confusion_matrix sorts string labels (i.e. our target and predicted labels) alphabetically. However, when you pass the labels to the display here,

disp = ConfusionMatrixDisplay(
  confusion_matrix=cm,
  display_labels=labels
)

you're passing them in the order defined in the dataset, which is not alphabetical:

['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
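
One way to keep the matrix aligned with the tick labels is to pass the same labels list to confusion_matrix, so its rows and columns follow the dataset order instead of sklearn's alphabetical default. The sketch below assumes a plot_confusion_matrix along the lines shown above; the normalization and plotting details are assumptions for illustration, not a quote of the notebook:

from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix
import matplotlib.pyplot as plt

def plot_confusion_matrix(y_preds, y_true, labels):
    # labels= forces the row/column order of the matrix to match the
    # dataset's tag order, so it lines up with display_labels below
    cm = confusion_matrix(y_true, y_preds, labels=labels, normalize="true")
    fig, ax = plt.subplots(figsize=(6, 6))
    disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
    disp.plot(cmap="Blues", ax=ax, colorbar=False)
    plt.title("Normalized confusion matrix")
    plt.show()

With the labels passed explicitly, the 99% cell should land on the O row rather than on I-LOC, matching the per-label numbers above.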

Again, thanks for your time, and feel free to reach out at any time if you have further questions or comments.

Contributing to open source is getting more and more interesting...
