Commit f8eb27f — Create 2025_gift.md
Authored by EloiZ on Nov 24, 2024 (parent 8e2f4aa). Adds `_publications/2025_gift.md` (83 additions, 0 deletions).
---
layout: publication
title: "GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers"
image: assets/img/publications/2025_gift.PNG
hide: false
category: [explainability, driving, foundation]
authors: Éloi Zablocki*, Valentin Gerard*, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
authors_internship: Éloi Zablocki*, <u>Valentin Gerard</u>*, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
venue: under review
venue_long: under review
year: 2025
month: 6
code_url: https://github.com/valeoai/GIFT
paper_url:
blog_url:
slides_url:
bib_url:
intern_work: true
permalink: /publications/gift/
---

<h1 align="center"> {{page.title}} </h1>
<!-- Simple call of authors -->
<!-- <h3 align="center"> {{page.authors}} </h3> -->
<!-- Alternatively you can add links to author pages -->
<h3 align="center"> <a href="https://scholar.google.fr/citations?user=dOkbUmEAAAAJ">Éloi Zablocki</a> &nbsp;&nbsp; Valentin Gerard &nbsp;&nbsp; Amaia Cardiel &nbsp;&nbsp; <a href="https://ama.liglab.fr/~gaussier/">Eric Gaussier</a> &nbsp;&nbsp; <a href="https://cord.isir.upmc.fr/">Matthieu Cord</a> &nbsp;&nbsp; <a href="https://scholar.google.com/citations?user=lxWPqWAAAAAJ">Eduardo Valle</a></h3>


<h3 align="center"> {{page.venue}} {{page.year}} </h3>

<div align="center">
<p>
{% if page.paper_url %}
<a href="{{ page.paper_url }}"><i class="far fa-file-pdf"></i> Paper</a>&nbsp;&nbsp;
{% endif %}
{% if page.code_url %}
<a href="{{ page.code_url }}"><i class="fab fa-github"></i> Code</a> &nbsp;&nbsp;
{% endif %}
{% if page.blog_url %}
<a href="{{ page.blog_url }}"><i class="fab fa-blogger"></i> Blog</a> &nbsp;&nbsp;
{% endif %}
{% if page.slides_url %}
<a href="{{ page.slides_url }}"><i class="far fa-file-pdf"></i> Slides</a>&nbsp;&nbsp;
{% endif %}
{% if page.bib_url %}
<a href="{{ page.bib_url }}"><i class="far fa-file-alt"></i> BibTeX</a>&nbsp;&nbsp;
{% endif %}
</p>
</div>


<div class="publication-teaser">
<img src="../../{{ page.image }}" alt="project teaser"/>
</div>


<hr>

<h2 align="center">Abstract</h2>

<p align="justify">Understanding deep models is crucial for deploying them in safety-critical applications. We introduce GIFT, a framework for deriving post-hoc, global, interpretable, and faithful textual explanations for vision classifiers. GIFT starts from local faithful visual counterfactual explanations and employs (vision) language models to translate those into global textual explanations. Crucially, GIFT provides a verification stage measuring the causal effect of the proposed explanations on the classifier decision. Through experiments across diverse datasets, including CLEVR, CelebA, and BDD, we demonstrate that GIFT effectively reveals meaningful insights, uncovering tasks, concepts, and biases used by deep vision classifiers. Our code, data, and models are released at https://github.com/valeoai/GIFT.</p>
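
At a high level, the abstract describes a three-stage flow: local visual counterfactuals, a (vision) language model that verbalizes them into global textual explanations, and a verification step that measures the causal effect of those explanations on the classifier. The sketch below is a minimal, hypothetical illustration of that flow only; every function name and data structure in it is a placeholder introduced for exposition, not the released API (see the Code link above for the actual implementation).

```python
# Hypothetical sketch of the pipeline described in the abstract.
# All names below are illustrative placeholders, not the GIFT codebase API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Counterfactual:
    original: str      # identifier of the original image
    edited: str        # identifier of the visually edited (counterfactual) image
    label_flip: bool   # whether the classifier decision changed under the edit


def describe_difference(cf: Counterfactual) -> str:
    """Placeholder for a vision-language model that verbalizes a local edit."""
    return f"edit on {cf.original}: some visual attribute was changed"


def aggregate(descriptions: List[str]) -> List[str]:
    """Placeholder for a language-model step that distills local descriptions
    into global candidate explanations of the classifier's behaviour."""
    return ["candidate global explanation"]


def verify(explanation: str, classifier: Callable[[str], int]) -> float:
    """Placeholder verification stage: estimate the causal effect of a
    proposed explanation on the classifier's decisions (e.g., change in
    prediction rate under targeted interventions)."""
    return 0.0


if __name__ == "__main__":
    classifier = lambda image_id: 1  # stand-in vision classifier
    counterfactuals = [Counterfactual("img_001", "img_001_cf", True)]
    local_texts = [describe_difference(cf) for cf in counterfactuals]
    for explanation in aggregate(local_texts):
        print(explanation, "-> causal effect:", verify(explanation, classifier))
```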

<br>
<hr>

<h2 align="center">BibTeX</h2>
<div align="left">
<pre class="bibtex-box">
@article{zablocki2025gift,
title = {GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers},
author = {{\'{E}}loi Zablocki and
Valentin Gerard and
Amaia Cardiel and
Eric Gaussier and
Matthieu Cord and
Eduardo Valle},
  journal = {arXiv},
year = {2025}
}
</pre>
</div>

<br>
