This repository contains the code to reproduce the results presented in the paper "Deep Clustering Evaluation: How to Validate Internal Validation Measures".
Run the following command to install the required dependencies:

```
pip install -r requirements.txt
```
To replicate the results reported in the paper using the calculated measure values:

- Download all the calculated internal measure values from the Google Drive and save them to a local folder.
- Run the ACE evaluation script.
To run the default setting, simply execute the following command:

```
python ACE.py
```
For more settings, run:

```
python ACE.py --cl_method <method> --rank_method <method> --eps <value> --filter_alpha <value> --graph_alpha <value>
```
- `--cl_method`: Clustering method (`hdbscan`, `dbscan`).
- `--rank_method`: Link analysis algorithm (`pr`, `hits`).
- `--eps`: Epsilon parameter for DBSCAN (default: 0.05).
- `--filter_alpha`: Family-wise error rate (FWER) for the Dip test (default: 0.05).
- `--graph_alpha`: FWER for creating the graph (default: 0.05).
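To illustrate the `--rank_method` options, the sketch below scores nodes of a small graph with PageRank and HITS via `networkx`. The graph here is purely hypothetical (made-up measure names and agreement edges); the actual graph in `ACE.py` is built from statistical tests controlled by `--graph_alpha`.

```python
import networkx as nx

# Hypothetical graph: nodes are internal measures, directed edges point
# to measures they "agree" with. This is illustrative only; ACE builds
# its graph from the calculated measure values.
G = nx.DiGraph()
G.add_edges_from([
    ("silhouette", "davies_bouldin"),
    ("davies_bouldin", "silhouette"),
    ("calinski_harabasz", "silhouette"),
    ("dunn", "calinski_harabasz"),
])

pr = nx.pagerank(G)       # corresponds to --rank_method pr
hubs, auth = nx.hits(G)   # corresponds to --rank_method hits

# A higher score marks a measure that is more central to the graph.
best = max(pr, key=pr.get)
```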
Download all the original datasets used to run and evaluate the deep clustering algorithms from the JULE repository. Ensure all downloaded datasets are stored in the `scripts/datasets` folder. Navigate to the `scripts` folder and run the following command to obtain the NMI and ACC values:

```
python get_truth.py --dataset <DATASET> --task <TASK>
```
Example for the JULE hyperparameter experiment on the COIL-20 dataset:

```
python get_truth.py --dataset COIL-20 --task jule
```
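For reference, NMI and ACC are the standard external scores in deep clustering: NMI is available directly in scikit-learn, and ACC is typically computed by finding the best one-to-one mapping between cluster ids and class ids with the Hungarian algorithm. The sketch below shows this common recipe; it is not the repository's exact implementation in `get_truth.py`.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: accuracy under the best one-to-one cluster-to-class mapping."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    counts = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1
    # Hungarian algorithm on the negated counts = maximum matching.
    rows, cols = linear_sum_assignment(-counts)
    return counts[rows, cols].sum() / y_true.size

y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([1, 1, 1, 0, 0, 2, 2, 2])  # same partition, permuted ids

acc = clustering_accuracy(y_true, y_pred)           # 1.0
nmi = normalized_mutual_info_score(y_true, y_pred)  # 1.0
```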
All scripts that generate internal measure values for the evaluation are in the `scripts/embedded_metric` folder. Since some internal measure values can only be obtained through R packages, both R and Python scripts are used.

- Calculate measure values:
  - `embedded_data.py`: Calculates values for the four internal measures reported in the main paper and prepares intermediate inputs for the R script.
  - `embedded.r`: Calculates the values for the remaining measures.
  - `collect_embedded_metric.py`: Post-processes the calculated values.
- Generate shell scripts for Slurm:
  - `make.py`: Generates shell scripts for submission to Slurm, providing a reference for users generating their own submission scripts.
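As an illustration of what "internal measure values" means here, the sketch below computes three widely used internal validation indices that scikit-learn ships (the paper's specific four measures, and the ones delegated to R, may differ). The toy data stands in for an embedding produced by a deep clustering checkpoint.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (
    calinski_harabasz_score,
    davies_bouldin_score,
    silhouette_score,
)

# Toy "embedding": in the real pipeline X would come from a checkpoint.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

sil = silhouette_score(X, labels)        # in [-1, 1], higher is better
ch = calinski_harabasz_score(X, labels)  # > 0, higher is better
db = davies_bouldin_score(X, labels)     # >= 0, lower is better
```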
Scripts for generating internal measure values used for the evaluation are in the `scripts/raw_metric` folder. Both R and Python scripts are used.

- Calculate measure values:
  - `get_raw.py`: Calculates values for the four internal measures reported in the main paper and prepares intermediate inputs for the R script.
  - `getraw.r`: Calculates the values for the remaining measures.
  - `collect_raw.py`: Post-processes the calculated values.
- Generate shell scripts for Slurm:
  - `make.py`: Generates shell scripts for submission to Slurm, providing a reference for users generating their own submission scripts.
Scripts for performing the Dip test on the embedding data obtained from JULE and DEPICT are in the `scripts/dip` folder.
Scripts for selecting checkpoints and performing the Dip test on the embedding data obtained from JULE and DEPICT are in the `scripts/DeepCluster` folder.