Welcome to the Few-shot Satellite Image Classification (OPS-SAT) repository. Follow the steps below to get started:
- Clone the repository:

  ```bash
  git clone https://github.com/ShendoxParadox/Few-shot-satellite-image-classification-OPS-SAT.git
  ```
- Navigate to the repository root folder:

  ```bash
  cd Few-shot-satellite-image-classification-OPS-SAT
  ```
- Set up the Python environment (make sure you have Conda installed on your machine):

  ```bash
  # Create a virtual environment with Python 3.9
  conda create --name myenv python=3.9

  # Activate the virtual environment
  conda activate myenv

  # Install project dependencies
  pip install -r requirements.txt
  ```
- Build the Docker image:

  ```bash
  docker build --no-cache -t ops_sat:latest .
  ```
- Run the Docker container:

  ```bash
  docker run -it ops_sat
  ```
- Modify the configuration: edit the `config.json` file as needed, e.g.:

  ```bash
  nano config.json
  ```
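As an illustration only, the sketch below shows the kind of settings this file holds. The key names are assumptions for readability, not the repository's actual schema; `config.json` itself is authoritative. The values mirror the defaults described later in this README.

```json
{
  "project": "OPS-SAT-Thesis-Project",
  "train_dataset_path": "../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/train/",
  "test_dataset_path": "../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/test/",
  "transfer_learning": false,
  "transfer_learning_dataset": "landuse",
  "input_shape": [200, 200, 3],
  "num_classes": 8,
  "dropout": 0.5,
  "optimizer": "Adam",
  "loss_function": "FocalLoss",
  "focal_loss": {"alpha": 0.2, "gamma": 2},
  "k_fold": 5,
  "epochs": 200,
  "batch_size": 4
}
```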
- Navigate to the source folder:

  ```bash
  cd src/
  ```
- Run the OPS-SAT development script:

  ```bash
  python OPS_SAT_Dev.py
  ```
- Choose a W&B option: follow the prompts to choose the Weights & Biases (WandB) logging option during script execution.
- View run results: open the WandB dashboard to observe the run results.
- Alternatively, pull and run the prebuilt Docker image:

  ```bash
  docker pull ramezshendy/ops_sat:latest
  docker run -it ramezshendy/ops_sat:latest
  ```
For any additional information or troubleshooting, refer to the documentation or contact the repository owner.
Configuration (`config.json`):

- Dataset Name: The OPS-SAT case dataset
- Dataset Variation Description: Augmented Color Corrected Synthetic Variation
- Training/Validation Dataset Path: `../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/train/`
- Test Dataset Path: `../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/test/`

  Change the training and test dataset paths to any of the available dataset variations in the `Data` folder.
- Transfer Learning: false

  If false, the model uses pretraining on ImageNet; if true, it applies transfer learning techniques.
- Transfer Learning Dataset: landuse

  The available transfer learning datasets are: landuse, imagenet, opensurfaces.
- Project: OPS-SAT-Thesis-Project
- Input Shape: [200, 200, 3]
- Number of Classes: 8
- Dropout: 0.5
- Output Layer Activation: Softmax
- Model Optimizer: Adam
- Loss Function: FocalLoss
  The implemented loss functions to choose from are: FocalLoss, SparseCategoricalCrossentropy.
- Model Metrics: [SparseCategoricalAccuracy]
- Early Stopping:
  - Monitor: val_sparse_categorical_accuracy
  - Patience: 6
- Model Checkpoint:
  - Monitor: val_sparse_categorical_accuracy
- Cross Validation K-Fold: 5
- Number of Epochs: 200
- Batch Size: 4
- Focal Loss Parameters (used only when the loss function is FocalLoss):
  - Alpha: 0.2
  - Gamma: 2
- Number of Freeze Layers: 5 (used only when transfer learning is true)
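For intuition, the FocalLoss option configured above follows FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), with alpha = 0.2 and gamma = 2 as the parameters. Below is a minimal pure-Python sketch for a single example; it is illustrative only, not the repository's implementation (which operates on batched tensors):

```python
import math

def focal_loss(probs, true_class, alpha=0.2, gamma=2.0):
    """Sparse categorical focal loss for one example.

    probs: predicted class probabilities (softmax output).
    true_class: integer index of the ground-truth class.
    Implements FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).
    """
    p_t = probs[true_class]
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# The (1 - p_t)^gamma factor down-weights easy, confident predictions,
# so training focuses on hard or misclassified examples.
confident = focal_loss([0.05, 0.9, 0.05], 1)  # small loss
uncertain = focal_loss([0.4, 0.3, 0.3], 1)    # much larger loss
```

This down-weighting is why focal loss is a common choice for imbalanced datasets such as the OPS-SAT classes.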
Project structure:

- /OPS-SAT-Thesis-Project
  - /Data
    - /Variation_Synthetic_Generation_color_corrected_Augmentation
      - /train
        - /Agricultural
        - /Cloud
        - /Mountain
        - /Natural
        - /River
        - /Sea_ice
        - /Snow
        - /Water
      - /test
    - /ops_sat
    - /Variation_Augmentation
    - /Variation_Original
    - /Variation_Synthetic_Generation
    - /Variation_Synthetic_Generation_color_corrected
  - /src
    - OPS_SAT_Dev.py
    - color_correction.py
    - image_augmentation.py
    - Your source code files
  - /notebooks
  - /models
    - best_weights.h5
    - fold_1_best_model_weights.h5
    - fold_2_best_model_weights.h5
    - fold_3_best_model_weights.h5
    - fold_4_best_model_weights.h5
    - fold_5_best_model_weights.h5
  - README.md
  - Dockerfile
  - config.json
  - .gitignore
  - requirements.txt
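The per-fold weight files under /models come from the 5-fold cross-validation configured above (Cross Validation K-Fold: 5). As a rough sketch of how k-fold train/validation index splits can be produced (illustrative only; `kfold_indices` is a hypothetical helper, and the repository may instead rely on a library such as scikit-learn's `KFold`):

```python
def kfold_indices(n_samples, k=5):
    """Split sample indices into k contiguous, near-equal validation folds.

    Returns a list of (train_indices, val_indices) pairs, one per fold;
    each sample appears in exactly one validation fold.
    """
    indices = list(range(n_samples))
    # Distribute any remainder over the first n_samples % k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        folds.append((train, val))
        start += size
    return folds

splits = kfold_indices(10, k=5)  # 5 (train, val) pairs, 2 validation samples each
```

A model is trained once per fold, and the best weights of each run are checkpointed, which is how files like fold_1_best_model_weights.h5 through fold_5_best_model_weights.h5 arise.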