xxcatullusxx / transformer-mm-explainability
This project is forked from hila-chefer/transformer-mm-explainability.
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method for visualizing any Transformer-based network, including examples for DETR and VQA.
License: MIT License