Note: We will publish the weights, dataset splits, databases, and results after review at MICCAI 2024.
Content-based image retrieval (CBIR) has the potential to significantly improve diagnostic aid and medical research in radiology. Current CBIR systems face limitations due to their specialization to certain pathologies, limiting their utility. In response, we propose using vision foundation models as powerful and versatile off-the-shelf feature extractors for content-based medical image retrieval. By benchmarking these models on a comprehensive dataset of 1.6 million 2D radiological images spanning four modalities and 161 pathologies, we identify weakly-supervised models as superior, achieving a P@1 of up to 0.594. This performance not only competes with a specialized model but does so without the need for fine-tuning. Our analysis further explores the challenges in retrieving pathological versus anatomical structures, indicating that accurate retrieval of pathological features presents greater difficulty. Despite these challenges, our research underscores the vast potential of foundation models for CBIR in radiology, proposing a shift towards versatile, general-purpose medical image retrieval systems that do not require specific tuning.
For usage of any code segments, please cite our work:
```bibtex
@article{denner2024leveraging,
  title={Leveraging Foundation Models for Content-Based Medical Image Retrieval in Radiology},
  author={Denner, Stefan and Zimmerer, David and Bounias, Dimitrios and Bujotzek, Markus and Xiao, Shuhan and Kausch, Lisa and Schader, Philipp and Penzkofer, Tobias and J{\"a}ger, Paul F and Maier-Hein, Klaus},
  journal={arXiv preprint arXiv:2403.06567},
  year={2024}
}
```
- Python — Ensure Python is installed on your machine. This project is compatible with Python 3.8.10 and above.
- Poetry — You'll need Poetry to manage dependencies and set up the environment. Install it with this command:
```shell
pip install poetry==1.8.2
```
Install all required Python dependencies:
We strongly recommend using Poetry for dependency management to ensure reproducibility.
```shell
# If you want the virtual environment set up in the project directory
export POETRY_VIRTUALENVS_IN_PROJECT=true
# Install packages
poetry install
# Activate the created virtual environment
poetry shell
```
Note: If the installation hangs in a pending state, be aware of this issue.
Alternatively, we also provide a requirements.txt, but we do not recommend using it, since secondary versions are not pinned, which could lead to unexpected behaviour.
```shell
# Create a virtual environment
python3 -m venv .venv
# Activate the created virtual environment
source .venv/bin/activate
# Install packages
pip install -r requirements.txt
```
There is an issue with loading the pretrained weights for MedCLIP. If you want to create the MedCLIP embeddings yourself, you have to adjust the following line in the MedCLIP codebase.
Download and prepare the datasets using the links below, storing them in a common directory.
- NIH14: NIH14 Dataset
- CheXpert: CheXpert Dataset
- MIMIC: MIMIC Dataset
- RadImageNet: RadImageNet Dataset
Dataset splits are provided in /datasets. For adding new datasets, mimic the provided CSV format in this directory. The preparation scripts are located in scripts/prepare_datasets but are not deterministic. For reproducibility, please use the provided splits.
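For orientation, a new dataset split CSV could be parsed as sketched below. The column names (`image_path`, `label`, `split`) are illustrative assumptions only; match your files to the actual schema of the CSVs shipped in /datasets.

```python
import csv
import io

# Hypothetical split CSV in the style of the files in /datasets.
# Column names here are assumptions -- check the shipped CSVs for the real schema.
sample = """image_path,label,split
images/patient001/view1.png,Pneumonia,train
images/patient002/view1.png,No Finding,val
images/patient003/view1.png,Atelectasis,test
"""

rows = list(csv.DictReader(io.StringIO(sample)))
train = [r for r in rows if r["split"] == "train"]
print(len(rows), len(train))  # 3 rows total, 1 of them in the train split
```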
As a first step, we utilize a range of foundation models to generate embeddings and store them in an .h5 file in the <path_to_embeddings> directory.
To reproduce all our experiments from the paper, use scripts/1_create_embeddings.sh.
To generate new embeddings manually, run:
```shell
python3 create_embeddings.py <path_to_embeddings> <path_to_checkpoints> <path_to_datasets> <dataset_csv> <model_name> <batch_size>
```
Add new models in src/embeddings/models.py following the BaseModel structure.
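The actual BaseModel class lives in src/embeddings/models.py; the sketch below only illustrates the general shape we assume for such an interface (a model name plus an embedding method), not the real signatures in the repository.

```python
from abc import ABC, abstractmethod
from typing import List, Sequence

# Illustrative sketch only: the real BaseModel is defined in
# src/embeddings/models.py and its method names/signatures may differ.
class BaseModel(ABC):
    name: str

    @abstractmethod
    def embed(self, images: Sequence) -> List[List[float]]:
        """Return one embedding vector per input image."""

class MeanPixelModel(BaseModel):
    """Toy model: embeds each image (a flat list of pixel values) as its mean."""
    name = "mean_pixel"

    def embed(self, images):
        return [[sum(img) / len(img)] for img in images]

model = MeanPixelModel()
print(model.embed([[0.0, 1.0], [2.0, 4.0]]))  # [[0.5], [3.0]]
```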
All our experiments are controlled with a config.json, which defines the experimental setup.
Experiment configurations from our paper are auto-generated using:
```shell
python3 scripts/2_create_experiments.py <path_to_experiments_dir> <path_to_embeddings> <path_to_dataset_csvs>
```
If you would like to run custom experiments, modify or create a config.json.
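A config.json might look roughly like the fragment below; every field name and value here is an assumption for illustration only, so treat the configs auto-generated by scripts/2_create_experiments.py as the authoritative template.

```json
{
  "experiment_name": "example_retrieval_experiment",
  "embeddings_path": "<path_to_embeddings>/model.h5",
  "dataset_csv": "<path_to_dataset_csvs>/example_test.csv",
  "top_k": 10
}
```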
To execute retrieval experiments:
```shell
python3 src/retrieval.py <path_to_config.json>
```
For paper results replication, run:
```shell
./scripts/3_run_retrieval.sh
```
This generates a knn.csv in each experiment directory for evaluation.
Execute linear probing experiments with:
```shell
python3 src/linear_probing.py <path_to_config.json>
```
Replicate paper results by running:
```shell
./scripts/4_run_linear_probing.sh
```
Evaluate experiments using the following scripts, which save results both in each experiment directory and the parent directory for comparison:
- Retrieval:
  ```shell
  python3 src/evaluation/evaluate_retrieval.py <path_to_experiments>
  ```
- kNN Classification:
  ```shell
  python3 src/evaluation/evaluate_knn_classification.py <path_to_experiments>
  ```
- Linear Probing:
  ```shell
  python3 src/evaluation/evaluate_linear_probing.py <path_to_experiments>
  ```
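As a rough illustration of the retrieval metric reported in the paper, Precision@1 can be computed from per-query nearest-neighbor labels as sketched here. The knn.csv column layout assumed below is hypothetical; the actual evaluation logic lives in src/evaluation/evaluate_retrieval.py.

```python
import csv
import io

# Hypothetical knn.csv layout: each row holds the query's label and the
# labels of its nearest neighbors. The real file written by src/retrieval.py
# may use a different schema.
sample = """query_label,neighbor_1,neighbor_2,neighbor_3
Pneumonia,Pneumonia,No Finding,Pneumonia
Atelectasis,No Finding,Atelectasis,Atelectasis
No Finding,No Finding,No Finding,Pneumonia
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# P@1: fraction of queries whose single nearest neighbor shares the query's label.
p_at_1 = sum(r["query_label"] == r["neighbor_1"] for r in rows) / len(rows)
print(round(p_at_1, 3))  # 2 of 3 queries match -> 0.667
```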