
SlimSAM: 0.1% Data Makes Segment Anything Slim

Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang
Learning and Vision Lab, National University of Singapore
Paper: [arXiv 2312.05284]

Updates

  • 🚀 March 22, 2024: Awesome-Efficient-Segment-Anything is now available. Find more efficient SAMs here.
  • 🚀 January 10, 2024: Run SlimSAM in your browser with 🤗 Transformers.js (demo).
  • 🚀 January 9, 2024: Quick loading with Hugging Face 🤗.
  • 🚀 January 7, 2024: Release models using uniform local pruning for easier state dict loading.
  • 🚀 December 19, 2023: Release the Colab example for SlimSAM.
  • 🚀 December 11, 2023: Release the training code, inference code and pre-trained models for SlimSAM.

[Figure omitted: segment-everything demo]

Fast Start 🚀

Quick loading with Hugging Face 🤗:

import requests
from PIL import Image
from transformers import SamModel, SamProcessor

model = SamModel.from_pretrained("Zigeng/SlimSAM-uniform-50").to("cuda")
processor = SamProcessor.from_pretrained("Zigeng/SlimSAM-uniform-50")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D localization of a window
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
scores = outputs.iou_scores
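
For reference, one way to pick out and visualize the highest-scoring mask from the outputs above (a minimal sketch, assuming the default 🤗 output shapes: one list entry per image, indexed by prompt and mask candidate):

import matplotlib.pyplot as plt

# scores has shape (batch, num_prompts, 3); pick the best of the 3 candidate masks
best_idx = scores[0, 0].argmax().item()
best_mask = masks[0][0, best_idx]  # boolean tensor of shape (H, W)

plt.imshow(raw_image)
plt.imshow(best_mask.numpy(), alpha=0.5)
plt.axis("off")
plt.savefig("slimsam_mask.png")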

Introduction

SlimSAM is a novel data-efficient SAM compression method that achieves superior performance with far less training data. The essence of SlimSAM is its alternate slimming framework, which effectively enhances knowledge inheritance under severely limited training data and exceptionally high pruning ratios. Diverging from prior techniques, our framework progressively compresses the model by alternately pruning and distilling distinct, decoupled sub-structures. Disturbed Taylor pruning is also proposed to address the misalignment between the pruning objective and the training target, thereby improving post-distillation performance after pruning.

[Figure omitted: overview of the SlimSAM process]

SlimSAM yields significant performance improvements while demanding over 10 times less training data than any other existing compression method. Even compared to the original SAM, SlimSAM achieves performance approaching the original while reducing the parameter count to merely 1.4% (9.1M), MACs to 0.8% (23G), and requiring only 0.1% (10k) of the SAM training data.

Visualization Results

Qualitative comparisons of results obtained using point prompts, box prompts, and segment-everything prompts are shown below.

Box Prompts and Point Prompts

[Figure omitted: segmentation results with box and point prompts]

Quantitative Results

We conducted a comprehensive comparison of performance, efficiency, and training cost against other SAM compression methods and structural pruning methods.

Comparison with other SAM compression methods.

Comparison with other structural pruning methods.

Installation

The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.

Install with

pip install -e .

The following optional dependencies are necessary for mask post-processing and for saving masks in COCO format (a small usage sketch follows the install command below).

pip install opencv-python pycocotools matplotlib 
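
As a small illustration of how these dependencies are used (the binary mask below is a hypothetical stand-in for a mask produced by the model):

import json
import numpy as np
from pycocotools import mask as mask_utils

# A binary mask as produced by the model (stand-in example)
binary_mask = np.zeros((1080, 1920), dtype=np.uint8)
binary_mask[200:600, 300:900] = 1

# Encode to COCO RLE and make the counts field JSON-serializable
rle = mask_utils.encode(np.asfortranarray(binary_mask))
rle["counts"] = rle["counts"].decode("utf-8")

with open("mask_coco_rle.json", "w") as f:
    json.dump({"segmentation": rle, "area": int(mask_utils.area(rle))}, f)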

Dataset

We use the original SA-1B dataset in our code. See here for an overview of the dataset. The dataset can be downloaded here.

The downloaded dataset should be organized as:

<train_data_root>/
      sa_xxxxxxx.jpg
      sa_xxxxxxx.json
      ......
<val_data_root>/
      sa_xxxxxxx.jpg
      sa_xxxxxxx.json
      ......

To decode a mask in COCO RLE format into binary:

from pycocotools import mask as mask_utils
mask = mask_utils.decode(annotation["segmentation"])

See here for more instructions to manipulate masks stored in RLE format.
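
For instance, decoding every mask of one SA-1B image might look like this (assuming the standard SA-1B layout, where each per-image JSON holds an 'annotations' list with COCO RLE 'segmentation' entries; the file name is a placeholder):

import json
from pycocotools import mask as mask_utils

with open("<train_data_root>/sa_xxxxxxx.json") as f:  # placeholder file name
    data = json.load(f)

binary_masks = [mask_utils.decode(ann["segmentation"]) for ann in data["annotations"]]
print(len(binary_masks), binary_masks[0].shape)  # number of masks and (H, W) of the first one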

Model Checkpoints

The base model of our method is available. To work with our dependency detection algorithm, we have split the original image encoder's qkv layer into three distinct linear layers: q, k, and v, as sketched below.
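
A rough sketch of that change (a generic illustration with made-up names, not the repo's exact code): a fused qkv projection with weight shape (3*dim, dim) becomes three independent (dim, dim) linear layers, so q, k, and v can be tracked and pruned separately.

import torch
import torch.nn as nn

def split_qkv(qkv: nn.Linear, dim: int):
    """Split a fused qkv linear layer into separate q, k, and v linear layers."""
    q, k, v = (nn.Linear(dim, dim, bias=qkv.bias is not None) for _ in range(3))
    with torch.no_grad():
        for i, layer in enumerate((q, k, v)):
            layer.weight.copy_(qkv.weight[i * dim:(i + 1) * dim])
            if qkv.bias is not None:
                layer.bias.copy_(qkv.bias[i * dim:(i + 1) * dim])
    return q, k, v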

Click the links below to download the checkpoint of the original SAM-B.

The checkpoints of our SlimSAM are available. We release two versions: SlimSAM-50 (pruning ratio = 50%) and SlimSAM-77 (pruning ratio = 77%).

Click the links below to download the checkpoints for the corresponding pruning ratio.

Global Pruning Models:

The above models can be instantiated by running:

import types
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Global-pruning checkpoints store the whole model object
SlimSAM_model = torch.load(<model_path>)
# Unwrap the DataParallel-wrapped image encoder
SlimSAM_model.image_encoder = SlimSAM_model.image_encoder.module

# The pruned encoder blocks return intermediate embeddings in addition to the
# activation; this patched forward keeps only the final image embedding.
def forward(self, x):
    x = self.patch_embed(x)
    if self.pos_embed is not None:
        x = x + self.pos_embed
    for blk in self.blocks:
        x, qkv_emb, mid_emb, x_emb = blk(x)
    x = self.neck(x.permute(0, 3, 1, 2))
    return x

SlimSAM_model.image_encoder.forward = types.MethodType(forward, SlimSAM_model.image_encoder)
SlimSAM_model.to(device)
SlimSAM_model.eval()

Local Pruning Models:

The above models can be instantiated by running:

import torch
from segment_anything import sam_model_registry

device = "cuda" if torch.cuda.is_available() else "cpu"

# Uniform local pruning keeps a regular layer layout, so the state dict
# loads directly through the model registry.
model_type = 'vit_p50'
checkpoint = 'checkpoints/SlimSAM-50-uniform.pth'
SlimSAM_model = sam_model_registry[model_type](checkpoint=checkpoint)
SlimSAM_model.to(device)
SlimSAM_model.eval()

Inference

First, download the SlimSAM-50 or SlimSAM-77 model for inference.

We provide detailed instructions in 'inference.py' on how to use a range of prompts, including 'point', 'box', and 'everything', for inference.

CUDA_VISIBLE_DEVICES=0 python inference.py
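
A rough point-prompt sketch (assuming the vendored segment_anything package exposes the same SamPredictor API as the original SAM; the image path is a placeholder, and 'inference.py' shows the exact flow used in the repo):

import cv2
import numpy as np
import torch
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = sam_model_registry['vit_p50'](checkpoint='checkpoints/SlimSAM-50-uniform.pth')
model.to(device).eval()

predictor = SamPredictor(model)
image = cv2.cvtColor(cv2.imread("images/example.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path
predictor.set_image(image)

# A single foreground point prompt at (x, y)
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # (3, H, W) candidate masks and their predicted IoU scores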

Train

First download a SAM-B model into 'checkpoints/' as the base model.

Step 1: Embedding Pruning + Bottleneck Aligning

The model after Step 1 is saved as 'checkpoints/vit_b_slim_step1_.pth'.

CUDA_VISIBLE_DEVICES=0 python prune_distill_step1.py  --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs>

Step 2: Bottleneck Pruning + Embedding Aligning

The model after Step 2 is saved as 'checkpoints/vit_b_slim_step2_.pth'.

CUDA_VISIBLE_DEVICES=0 python prune_distill_step2.py  --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs> --model_path 'checkpoints/vit_b_slim_step1_.pth' 

You can adjust the training settings to meet your specific requirements. While our method demonstrates impressive performance with just 10,000 training samples, incorporating additional training data will further enhance the model's effectiveness.
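
For example (the pruning ratios, epoch counts, and data paths below are illustrative only; the per-step ratios used for the released checkpoints may differ):

CUDA_VISIBLE_DEVICES=0 python prune_distill_step1.py --traindata_path data/sa_train --valdata_path data/sa_val --prune_ratio 0.5 --epochs 20
CUDA_VISIBLE_DEVICES=0 python prune_distill_step2.py --traindata_path data/sa_train --valdata_path data/sa_val --prune_ratio 0.5 --epochs 20 --model_path 'checkpoints/vit_b_slim_step1_.pth'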

BibTeX of our SlimSAM

If you use SlimSAM in your research, please use the following BibTeX entry. Thank you!

@article{chen20230,
  title={0.1\% Data Makes Segment Anything Slim},
  author={Chen, Zigeng and Fang, Gongfan and Ma, Xinyin and Wang, Xinchao},
  journal={arXiv preprint arXiv:2312.05284},
  year={2023}
}

Acknowledgement

SAM (Segment Anything) [bib]
@article{kirillov2023segany,
  title={Segment Anything}, 
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}
Torch Pruning (DepGraph: Towards Any Structural Pruning) [bib]
@inproceedings{fang2023depgraph,
  title={Depgraph: Towards any structural pruning},
  author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16091--16101},
  year={2023}
}


slimsam's Issues

evaluate algorithms such as SAM, EfficientSAM, etc.

Hi, I found your work, and it's great.

I am a newbie having some difficulties in reproducing your experiment and would like to seek your help.

I found that you did a comparison experiment in your paper using the SA-1B dataset, and I successfully reproduced SlimSAM's training and validation results on it.

However, since you haven't released the validation scripts yet, I followed your experimental setup (https://github.com/czg1225/SlimSAM/issues/5#issuecomment-1875436503) and wrote my own scripts to validate FastSAM, MobileSAM, EfficientSAM, and other algorithms on SA-1B, but I couldn't reproduce the results listed in your table very well, especially for EfficientSAM (only 28%) and EdgeSAM.

I wonder if it would be convenient for you to release the validation scripts for these comparison algorithms for learning purposes; I would appreciate it. Thank you very much!

AttributeError: 'DataParallel' object has no attribute 'img_size'

Hi, I'm using torch2trt for model conversion, and I get the following error when converting the .pth to a .engine. Converting another network's .pth previously worked fine. Is this due to a network structure or parameter mismatch, or something I did when training the model?

In addition, the problematic .pth file was pruned. Could the pruning operation have resulted in missing or null parameters, causing the error? The trained .pth file behaves normally during inference, and the problem occurs only during model conversion. Is it because training and model conversion do not place the same strict requirements on the parameters and other contents of the .pth file?

Traceback (most recent call last):
File "convert-sam-trt.py", line 90, in
model_trt = torch2trt(model, [batched_input, multimask_output], fp16_mode=True,strict_type_constraints=True)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8.egg/torch2trt/torch2trt.py", line 558, in torch2trt
outputs = module(*inputs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 97, in forward
input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0)
File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 97, in
input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0)
File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 171, in preprocess
padh = self.image_encoder.img_size - h
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DataParallel' object has no attribute 'img_size'

Prune ratios for each step

Thank you for your great work! Would you mind sharing the prune ratios for each step? I couldn't find them in the paper or the code. Thanks!

Questions about multi-GPU training

Hi, when I use your code for training, for example prune_distill_step1.py, I use the command CUDA_VISIBLE_DEVICES=0,1 python prune_distill_step1.py --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs>.

I didn't change the code in prune_distill_step1.py, except for setting batchsize=8. However, I got the output shown in the screenshot below, and training on multiple GPU cards did not succeed; it reports insufficient GPU memory. I did find the part of your code that uses multiple GPUs for parallel training, such as model.image_encoder = torch.nn.DataParallel(model.image_encoder).

Is there a possible problem with the code or am I missing some setting? How should I approach multi-GPU training? Looking forward to your reply, thank you very much!

[Screenshot omitted: multi-GPU training error output]

Low IoU when trained with self-captured data.

Thanks for your great work. But when I try to train with my own data, the IoU score seems too low. Here is the log.

CUDA visible devices: 1                                                         
CUDA Device Name: NVIDIA GeForce RTX 4090                                       
===========================Parameter Settings===========================                                                                                        
Pruning Ratio: 0.5                                                              
VIT num_heads: 12                       
norm_type: mean                                                                 
imptype: Disturb                        
global: False                                                                   
learning rate: 0.0001
a_weight: 0.5
round_to 12
TRAIN_SIZE 7825 VAL_SIZE 200 GRAD_SIZE 1000 Epochs 20
===========================Pruning Start===========================
/home/user/workspace/SlimSAM-master/torch_pruning/dependency.py:639: UserWarning: Unwrapped parameters detected: ['neck.3.bias', 'neck.1.bias', 'pos_embed', 'neck.1.weight', 'neck.3.weight'].
 Torch-Pruning will prune the last non-singleton dimension of a parameter. If you wish to customize this behavior, please provide an unwrapped_parameters argument.
  warnings.warn(warning_str)
vit_b Pruning:                          
  Params: 89578240 => 45116800          
  Macs: 368858711040.0 => 185712844800.0                                        
  Output:                               
torch.Size([1, 256, 64, 64])            
torch.Size([600, 14, 14, 768])                                                  
torch.Size([12, 64, 64, 768])           
torch.Size([12, 64, 64, 3072])                                                  
torch.Size([25, 64, 64, 384])                                                   
------------------------------------------------------                          
                                        
save checkpoint                         
epoch: 0                                                                                                                                                        
IOU: 0.00037980064841486576 Best IOU 0.00037980064841486576                     
epoch: 1                                                                                                                                                        
IOU: 0.0003555969132073369 Best IOU 0.00037980064841486576                      
save checkpoint                                                                                                                                                 
epoch: 2                                                                        
IOU: 0.0004798856262954162 Best IOU 0.0004798856262954162                       
epoch: 3                                                                        
IOU: 0.00038100686134219785 Best IOU 0.0004798856262954162                      
epoch: 4                                                                        
IOU: 0.00033190775964380326 Best IOU 0.0004798856262954162                                                                                                      
epoch: 5                                                                                                                                                        
IOU: 0.00034291165492228654 Best IOU 0.0004798856262954162                      
Epoch 00007: reducing learning rate of group 0 to 5.0000e-05.                   
epoch: 6                                                                                                                                                        
IOU: 0.00033288924349753746 Best IOU 0.0004798856262954162

By the way, my mask files were converted to COCO RLE format from the original label files, which only contain polygons. I converted them first to binary masks and then to COCO RLE format, so the converted JSON files don't have point_coords in the dict like the SA-1B dataset does. I therefore altered the code in prune_distill_step1.py as shown below. Is that the key reason?

for example in dict_data:

    sub_count += 1

    # input_point = np.array(example['point_coords'])
    # input_label = np.array([1])

    mask = mask_utils.decode(example["segmentation"])

    # point_coords = transform.apply_coords(input_point, original_image_size)
    # coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=device)
    # labels_torch = torch.as_tensor(input_label, dtype=torch.int, device=device)
    # coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :]
    # points = (coords_torch, labels_torch)

    # Model inference
    image_embedding,_,_,_,_ = model.image_encoder(input_image)
    sparse_embeddings, dense_embeddings = model.prompt_encoder(
        points=None, # points,
        boxes=None,
        masks=None,
    )

Questions about Conversion of torch model to tensorRT model

Hi, the following error occurs when converting a torch model to a TensorRT model: TypeError: forward() missing 1 required positional argument: 'multimask_output'.
But I trained the model exactly according to the README.
Could you please help me out with this?

(sam0) jetson@ubuntu:~/Workspace/aicam/ircamera$ python3 convert-sam-trtpth.py
Traceback (most recent call last):
File "convert-sam-trtpth.py", line 14, in
model_trt = torch2trt(model, [x], fp16_mode=True)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8.egg/torch2trt/torch2trt.py", line 558, in torch2trt
outputs = module(*inputs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
TypeError: forward() missing 1 required positional argument: 'multimask_output'

License

Under what license are the model and code released?

Batch Prompting for Multiple Boxes

Hi @czg1225

Thanks for this repo!

Can you please tell me how I can do batch prompting by giving multiple boxes as prompts to the model?

Right now, the predict function takes a single numpy array of length 4.

Also, can you please clarify whether, for custom dataset training, each image file should have a corresponding JSON, or whether a single JSON for the whole folder is enough? I am getting the error shown below. I have set gradsize to 100 and 1000. My training set has 1306 images and my validation set has 293.

[Screenshot omitted: error message]

The MACs result doesn't contain q@k and attn@V

The computational workload of the matrix multiplications in the attention modules (q@k and attn@v), which is non-negligible, is not included in torch_pruning.utils.count_ops_and_params.
torchprofile can calculate the MACs of q@k and attn@v.

Question about del_pos_init and get_pos_init

Hi! Could you please tell me what the meaning of del_pos_init and get_pos_init is? There are no such operations in the original torch_pruning when pruning ViT or Swin Transformer. Why is the positional embedding removed here, and what is its effect on pruning?

AttributeError: 'GELU' object has no attribute 'approximate'

Hi 👋 thanks for the project ❤ I got an error when I tried running inference.py (torch 2.1.0+cu118)

CUDA visible devices: 1
CUDA Device Name: Tesla T4
model_path: checkpoints/SlimSAM-77.pth
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-19-14955c5395e4> in <cell line: 165>()
    164 
    165 if __name__ == '__main__':
--> 166     test_model()

15 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1693             if name in modules:
   1694                 return modules[name]
-> 1695         raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
   1696 
   1697     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'GELU' object has no attribute 'approximate'

Is there a way to fix it?

About GPU usage during inference

Hi, thank you for sharing. I found that after pruning, the model size of SlimSAM-77 is only 38 MB, which is about the same as that of EdgeSAM, but the GPU memory usage of SlimSAM is still very high (3071 MiB), while that of EdgeSAM is only 433 MiB. Why does this happen? I don't quite understand the technical principle.

list index out of range

I'm using a dataset from Roboflow. Can you help me?

!CUDA_VISIBLE_DEVICES=0 python prune_distill_step1.py --traindata_path "/kaggle/working/Crop-Fields-LOD-13-14-15-4/train/_annotations.coco.json" --valdata_path "/kaggle/working/Crop-Fields-LOD-13-14-15-4/valid/_annotations.coco.json" --trainsize 480 --valsize 126 --prune_ratio 0.3 --epochs 20

===========================Parameter Settings===========================
Pruning Ratio: 0.3
VIT num_heads: 12
norm_type: mean
imptype: Disturb
global: False
learning rate: 0.0001
a_weight: 0.5
round_to 12
TRAIN_SIZE 480 VAL_SIZE 126 GRAD_SIZE 1000 Epochs 20
Traceback (most recent call last):
File "/kaggle/working/SlimSAM/prune_distill_step1.py", line 295, in
train_model()
File "/kaggle/working/SlimSAM/prune_distill_step1.py", line 118, in train_model
batch = next(grad_iter)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 634, in next
data = self._next_data()
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data
return self._process_data(data)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data
data.reraise()
File "/opt/conda/lib/python3.10/site-packages/torch/_utils.py", line 644, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/kaggle/working/SlimSAM/load_sam_json.py", line 39, in getitem
image = cv2.imread(self._image_paths[index])
IndexError: list index out of range

https://universe.roboflow.com/cropfields/crop-fields-lod-13-14-15/dataset/4

Is SlimSAM compatible with SAM?

Hi,

Can we plug the SlimSAM weights into SAM (by recombining the q, k, v weights into a single matrix per layer)?

If yes, then SlimSAM could be ported easily to the 🤗 hub. Currently I'm getting errors like:

size mismatch for vision_encoder.layers.7.mlp.lin2.bias: copying a param with shape torch.Size([168]) from checkpoint, the shape in current model is torch.Size([768]).
        size mismatch for vision_encoder.layers.8.layer_norm1.weight: copying a param with shape torch.Size([168]) from checkpoint, the shape in current model is torch.Size([768]).

It seems that the dimensions differ per layer based on the pruning. Is there any way to load such a state dict in PyTorch?
