
dasnet's Introduction

DASNet: Dual attentive fully convolutional siamese networks for change detection of high-resolution satellite images

Change detection is a basic task in remote sensing image processing. The research objective is to identify the change information of interest and to filter out irrelevant changes that act as interference factors. Recently, the rise of deep learning has provided new tools for change detection, which have yielded impressive results. However, the available methods focus mainly on the difference information between multitemporal remote sensing images and lack robustness to pseudo-change information. To overcome current methods' lack of resistance to pseudo-changes, in this paper we propose a new method, dual attentive fully convolutional Siamese networks (DASNet), for change detection in high-resolution images. Through the dual attention mechanism, long-range dependencies are captured to obtain more discriminative feature representations and to enhance the recognition performance of the model. Moreover, sample imbalance is a serious problem in change detection, i.e., unchanged samples are much more abundant than changed samples, and it is one of the main causes of pseudo-changes. We propose the weighted double-margin contrastive (WDMC) loss to address this problem by reducing the attention paid to unchanged feature pairs and increasing the attention paid to changed feature pairs. The experimental results of our method on the change detection dataset (CDD) and the building change detection dataset (BCDD) demonstrate that, compared with other baseline methods, the proposed method realizes maximum improvements of 2.9% and 4.2%, respectively, in the F1 score.

You can read the paper at https://ieeexplore.ieee.org/document/9259045/ or on arXiv at https://arxiv.org/abs/2003.03608
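To make the weighted double-margin contrastive (WDMC) loss described in the abstract more concrete, here is a minimal PyTorch sketch written from that description. The class name, the margin values, and the 0.147 changed-pixel ratio are illustrative assumptions, not the repository's exact implementation (see loss.py for that).

import torch
import torch.nn as nn

class WeightedDoubleMarginContrastiveLoss(nn.Module):
    """Sketch of a weighted double-margin contrastive loss (illustrative only).

    dist: per-pixel Euclidean distance between the two embedded feature maps.
    y:    per-pixel label, 1 for changed pixels, 0 for unchanged pixels.
    """

    def __init__(self, margin_unchanged=0.3, margin_changed=2.2, changed_ratio=0.147):
        super().__init__()
        self.m1 = margin_unchanged   # unchanged pairs are pushed below this margin
        self.m2 = margin_changed     # changed pairs are pushed above this margin
        # Weights counteract class imbalance: changed pixels are rare, so they get
        # the larger weight; the 0.147 ratio is an illustrative assumption.
        self.w_changed = 1.0 / changed_ratio
        self.w_unchanged = 1.0 / (1.0 - changed_ratio)

    def forward(self, dist, y):
        loss_unchanged = self.w_unchanged * (1 - y) * torch.clamp(dist - self.m1, min=0.0).pow(2)
        loss_changed = self.w_changed * y * torch.clamp(self.m2 - dist, min=0.0).pow(2)
        return (loss_unchanged + loss_changed).mean()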

The architecture:

Requirements

Most of the problems in the issue list are caused by the Python or PyTorch version. We have updated the source code to work with newer versions of PyTorch. We hope this repo is useful to you.

Datasets

This repo is built for remote sensing change detection. We report the performance on two datasets.

Directory Structure

The file structure is as follows:

$T0_image_path/*.jpg
$T1_image_path/*.jpg
$ground_truth_path/*.jpg

##################

NOTE: We give an example of the directory structure in .example, and the values of the label images need to be 0 and 1. If you do not convert the labels, the model will not train correctly.

##################
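If your ground-truth masks are stored as the usual 0/255 grayscale (or RGB) images, converting them to the 0/1 labels the model expects can look like the following sketch at load time. The function name and the 128 threshold are illustrative assumptions, not part of this repo.

import numpy as np
from PIL import Image

def load_binary_label(path, threshold=128):
    """Load a ground-truth mask and map it to {0, 1}.

    CDD/BCDD masks are essentially black/white, so any mid-range threshold works;
    128 here is just an illustrative choice.
    """
    mask = np.array(Image.open(path).convert('L'))   # force a single channel
    return (mask >= threshold).astype(np.uint8)      # 0 = unchanged, 1 = changed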

Pretrained Model

The backbone model and the pretrained models for CDD and BCDD can be downloaded from [Google Drive] or [Baidu Disk] (password: 86of).

Training

cd $CD_ROOT
python train.py

Testing

cd $CD_ROOT
python test.py

Citation

If our repo is useful to you, please cite our published paper as follows:

Bibtex
@article{chen2020dasnet,
    title={DASNet: Dual attentive fully convolutional siamese networks for change detection of high resolution satellite images},
    author={Chen, Jie and Yuan, Ziyang and Peng, Jian and Chen, Li and Huang, Haozhe and Zhu, Jiawei and Lin, Tao and Li, Haifeng},
    journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
    DOI = {10.1109/JSTARS.2020.3037893},
    year={2020},
    type = {Journal Article}
}

Endnote
%0 Journal Article
%A Chen, Jie
%A Yuan, Ziyang
%A Peng, Jian
%A Chen, Li
%A Huang, Haozhe
%A Zhu, Jiawei
%A Lin, Tao
%A Li, Haifeng
%D 2020
%T DASNet: Dual attentive fully convolutional siamese networks for change detection of high resolution satellite images
%B IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
%R 10.1109/JSTARS.2020.3037893
%! DASNet: Dual attentive fully convolutional siamese networks for change detection of high resolution satellite images


dasnet's Issues

Normalization in preprocess of Dataset

Hi,
I found that the code only subtracts the mean when preprocessing the dataset, which means the input is a float tensor with values roughly in [-128, 128]. Is that intended?

Thanks for your reply
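For readers unfamiliar with this style of preprocessing, mean-only normalization (no scaling to [0, 1] and no division by a standard deviation) typically looks like the sketch below. The specific mean values are the common Caffe-style per-channel means and are only an assumption about what this repo uses.

import numpy as np

# Subtract a per-channel mean without rescaling, leaving values roughly in [-128, 128].
MEAN = np.array([104.00699, 116.66877, 122.67892], dtype=np.float32)  # illustrative values

def preprocess(image_uint8):
    img = image_uint8.astype(np.float32) - MEAN   # H x W x 3, float
    return img.transpose(2, 0, 1)                 # to C x H x W for PyTorch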

Are there several writing mistakes (typos) in your paper?

Hi,
I've read your paper and I think the number "0" was omitted in the maximum functions of Eq. (5) and Eq. (6). Am I right?

And I'm also a little confused about your WDMC loss function.
I can't find any weights in "class DContrastiveLoss" in your code. Is this "DContrastiveLoss" the implementation of your WDMC loss?

Thank you~

About Evaluation Metrics

Thank you for your work.
You used the CDD dataset in your paper and achieved very good results on its test set. When computing the evaluation metrics, e.g., the F1 score, should one compute F1 for each 256x256 image in the test set and then average, or accumulate the confusion matrix over the whole test set and then compute the F1 score?
From your code it looks like you accumulate the confusion matrix over the whole test set; is my understanding correct?
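For reference, the second option, accumulating one confusion matrix over the whole test set and computing the metrics once, can be sketched as follows; the function and variable names are illustrative, not taken from the repository.

import numpy as np

def dataset_level_f1(pred_masks, gt_masks):
    """Precision/recall/F1 from a confusion matrix accumulated over the whole test set.

    pred_masks, gt_masks: iterables of binary 0/1 numpy arrays of equal shape.
    """
    tp = fp = fn = 0
    for pred, gt in zip(pred_masks, gt_masks):
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp += np.sum(pred & gt)
        fp += np.sum(pred & ~gt)
        fn += np.sum(~pred & gt)
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return precision, recall, f1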

how to generate change map

Hi, I'm new to the CD task. I recently read your paper and tested the code. The change maps shown in your paper are binary images, but from your description of the method I can only obtain a distance map. Could you give some guidance? I appreciate your help!
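For anyone with the same question: a binary change map is usually obtained by thresholding the (normalized) distance map. The sketch below uses a fixed 0.5 threshold for illustration; the repository's test code instead appears to sweep many candidate thresholds and keep the best-scoring one.

import numpy as np

def distance_to_change_map(distance_map, threshold=0.5):
    """Binarize a Euclidean distance map into a change map (illustrative sketch)."""
    d = distance_map.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)   # min-max normalize to [0, 1]
    return (d > threshold).astype(np.uint8)            # 1 = changed, 0 = unchanged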

The experimental results can't be reproduced.

I appreciate your work!

I noticed the following settings in your paper:

  1. You take DANet with a ResNet-50 pretrained for semantic segmentation as your basic network and apply a modified WDMC loss for metric learning.
  2. You do not mention whether a data augmentation strategy is used in your experiments.

I reproduced your code on the CDD with the pre-split dataset by Lebedev.

Usually, accuracy, precision, recall, and F1 are calculated for every prediction and then averaged. However, you accumulate the overall TP, TN, FN, and FP over the whole test set. I modified the evaluation process as follows.

pre, rec, acc, and f1_sc are initialized as averageMeter objects:

# inside the validation loop; requires
# import numpy as np
# from sklearn.metrics import precision_score, recall_score, accuracy_score, f1_score

embedding_distance_map = single_layer_similar_heatmap_visual(out_embedding_t0, out_embedding_t1, 'l2')
p = embedding_distance_map.reshape(256, 256)

# min-max normalize the distance map to [0, 1]
min_, max_ = np.min(p), np.max(p)
p = (p - min_) / (max_ - min_)

# binarize at 0.5
p[p > 0.5] = 1
p[p <= 0.5] = 0

g = targets.data.cpu().numpy().reshape(256, 256)

pre.update(precision_score(g.reshape(-1), p.reshape(-1)))
rec.update(recall_score(g.reshape(-1), p.reshape(-1)))
acc.update(accuracy_score(g.reshape(-1), p.reshape(-1)))
f1_sc.update(f1_score(g.reshape(-1), p.reshape(-1)))

I modified the evaluation metric and kept the rest of the experimental settings, except that the batch size is 36 on my machine.

I got the following results:

OA 0.9646501922607422, PRE 0.7079651923928499, REC 0.5782524152913715, F1 0.5973189549732922

which is far from yours. The results also underperform FCN-PP, FC-EF, etc.

I did not know how you obtain the binary map from the distance map, so I thresholded at the halfway value (0.5). Is that right?

How can I reproduce your experiments? :)

If there are any updates to the code, please let me know.

How to test the pretrained weight

I downloaded BCD_model_best_au.pth and wish to run some small tests on the performance of the net.
But it seems not to contain the weights of the related FC layers.
Also, would you mind telling us how to use the dataset (just put the real and synthesized images together)?

Hello, would you mind sharing your test code? Thank you very much!!

Dear author:
I am new to PyTorch and change detection. Regarding the test file, I saw that you said the validate function in train.py can be used to build the test script, but there are many details I still do not understand. Could you please share your test file? Thank you very much! If it is convenient, could you send it to my email? zuoyiping$163.com

The evaluation index score is not good enough

I ran the source code you released and output the difference map, and found that with the network trained under the guidance of the loss function, the obtained features are not separable in terms of Euclidean distance. According to my understanding of the paper, the Euclidean distance for unchanged pixels should be as small as possible and the distance for changed pixels as large as possible, but the visualization of the distances does not look like this, and changed and unchanged pixels cannot be distinguished. I think this is the reason for the unsatisfactory evaluation scores, but I do not know where the problem comes from. I hope you can analyze it, thank you very much.

Convert the heatmap to the result image

Hey, I see that you output the resulting map, but in test.py the result is a heatmap, and I want to know how you converted it to the final result image. Hope to get your answer, thank you very much!

share prediction result

Hello, I am very interested in your work. I ran into a problem while reproducing the code. Could you share the prediction results on the CDD and BCDD datasets?

About the labels in train loop

Hi, dear friends:
I think in the paper the authors convey the idea that the embedded feature maps should be upsampled to the same size as the labels in order to calculate the loss. However, in the source code, it seems that the labels are downsampled to the same size as the embedded feature maps. This confuses me; could you please explain why it is done this way?
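For reference, downsampling the labels to the feature-map size, which is what the source code appears to do according to this issue, can be sketched as follows; the names and shapes are illustrative.

import torch.nn.functional as F

def downsample_labels(labels, feature_size):
    """Resize binary labels to the spatial size of the embedded feature maps.

    labels:       (N, 1, H, W) float tensor with values in {0, 1}
    feature_size: (h, w) spatial size of the feature maps, e.g. (32, 32)
    Nearest-neighbour interpolation keeps the labels binary after resizing.
    """
    return F.interpolate(labels, size=feature_size, mode='nearest')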

dict names dont match when Loading the trained best weight

Hi, I noticed that the VGG-16 pretrained weights you used are a bit different from the version I downloaded from torchvision.
(For example, in the torchvision VGG the dict names are features.0.weight..., while you use conv1.0....)
So would you mind sharing the source of the pretrained VGG-16 weights you used?

The loss may be negative and not converge

It seems the loss cannot converge after many epochs (so the dataset or the net may have something wrong).
Any suggestions about the data?
Also, the loss sometimes takes negative values, which is a bit strange.

CDD dataset

May I ask how to get the CDD dataset? Thanks!

About the thresh, f-score, and label data

Hello, I ran your code on the CDD dataset following the instructions but got bad results. My 'Best Thresh' is always 1, and the F-score does not improve. Could you please tell me why? Thank you very much! What's more, I am curious about the labels: should the range be [0, 1] or [0, 255]? The initial data is [0, 255]; without normalizing to [0, 1] I get a weird loss in the hundreds.

RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 3 and 4 at C:\w\1\s\tmp_conda_3.6_095855\conda\conda-bld\pytorch_1579082406639\work\aten\src\TH/generic/THTensor.cpp:603

Traceback (most recent call last):
  File "train.py", line 238, in <module>
    main()
  File "train.py", line 188, in main
    for batch_idx, batch in enumerate(train_loader):
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
    data = self._next_data()
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\dataloader.py", line 881, in _process_data
    data.reraise()
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\_utils.py", line 394, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "C:\ProgramData\Anaconda3\envs\dasnet2\lib\site-packages\torch\utils\data\_utils\collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 3 and 4 at C:\w\1\s\tmp_conda_3.6_095855\conda\conda-bld\pytorch_1579082406639\work\aten\src\TH/generic/THTensor.cpp:603
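This error generally means the samples returned by the dataset do not all have the same number of dimensions (for example, a label loaded once with a channel axis and once without), so default_collate cannot stack them. Below is a hedged sketch of the usual fix inside a dataset's __getitem__; the function and variable names are illustrative, not from this repo.

import torch

def to_consistent_tensor(label_array):
    """Give every label the same number of dimensions before batching.

    label_array: 2-D numpy array (H, W). Adding an explicit channel axis gives
    every sample the shape (1, H, W), so default_collate can stack the batch.
    """
    t = torch.from_numpy(label_array).float()
    if t.dim() == 2:
        t = t.unsqueeze(0)   # (H, W) -> (1, H, W)
    return t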

How do I run the code

Can anyone tell me how to run the code in PyCharm? Does it have to run on a virtual machine?

Licence.

Hi @lehaifeng ,
Can you please add a license for this code? Currently it doesn't have one, which makes it ambiguous whether we can use it or not.
Thanks

Error loading pretrained weights

Hi,

I tried loading the weights from https://drive.google.com/drive/folders/1iTsmLDCWcNm6odchkpmZY6dSq7dEpQBP

using this code:

model = SiameseNet(norm_flag='l2')
checkpoint = torch.load('/content/CDD_model_best.pth', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])

but I am getting this error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-67-144de018e26d> in <module>()
----> 1 model.load_state_dict(checkpoint['state_dict'])

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
   1050         if len(error_msgs) > 0:
   1051             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1052                                self.__class__.__name__, "\n\t".join(error_msgs)))
   1053         return _IncompatibleKeys(missing_keys, unexpected_keys)
   1054 

RuntimeError: Error(s) in loading state_dict for SiameseNet:
	Missing key(s) in state_dict: "CNN.fc6_1.0.weight", "CNN.fc6_1.0.bias", "CNN.fc7_1.0.weight", "CNN.fc7_1.0.bias", "CNN.fc6_2.0.weight", "CNN.fc6_2.0.bias", "CNN.fc7_2.0.weight", "CNN.fc7_2.0.bias", "CNN.fc6_3.0.weight", "CNN.fc6_3.0.bias", "CNN.fc7_3.0.weight", "CNN.fc7_3.0.bias", "CNN.fc6_4.0.weight", "CNN.fc6_4.0.bias", "CNN.fc7_4.0.weight", "CNN.fc7_4.0.bias", "CNN.embedding_layer.weight", "CNN.embedding_layer.bias". 

Can you help me?
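If others hit the same missing-key error, one common workaround (not an official fix from the authors) is to load the checkpoint non-strictly and inspect what is missing, reusing the same SiameseNet construction as in the snippet above:

import torch

# SiameseNet and the checkpoint path are taken from the snippet above.
# strict=False skips keys the checkpoint does not provide and reports them,
# so you can at least see which layers would stay randomly initialized.
model = SiameseNet(norm_flag='l2')
checkpoint = torch.load('/content/CDD_model_best.pth', map_location='cpu')
incompatible = model.load_state_dict(checkpoint['state_dict'], strict=False)
print('missing keys:', incompatible.missing_keys)
print('unexpected keys:', incompatible.unexpected_keys)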

When I run python train.py, I get the following error:
ImportError: cannot import name 'PAM_Module'
Can you help me?
Thank you

About the output's resolution

Hello, I found that DASNet downsamples the input to [33, 33] and then interpolates it directly back to the initial size [256, 256], so my results have poor resolution. Would you please tell me how to deal with this?
Thank you very much!

about label and result

I appreciate your work!
I want to ask you two questions.
The first: the label values are 0 and 1, but the ground truth is 0-255 (RGB). Did you first convert the ground truth to a grayscale image and then apply thresholding? If so, what is the threshold?
The second: in test.py the result is a heatmap. How did you convert it to the final result image?
Looking forward to hearing from you!

thresh = np.array(range(0, 256)) / 255.0

Hello, what is the purpose of this code, which appears repeatedly in your code? Is 256 the size of the input image? If my input image size is 512, do I need to change it to 512? Hope to get your answer, thank you!
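A note for readers with the same question: this line builds 256 candidate threshold values evenly spaced in [0, 1]; it has nothing to do with the image size, so a 512x512 input does not require changing 256 to 512. Below is a minimal sketch of the kind of threshold sweep it enables; the function and variable names are illustrative.

import numpy as np
from sklearn.metrics import f1_score

def best_threshold(distance_map, gt, num_thresholds=256):
    """Sweep candidate thresholds over a normalized distance map and keep the one
    with the best F1 score. The 256 candidates are threshold values in [0, 1],
    not pixels, so the image size does not matter. gt must be a binary 0/1 array."""
    thresholds = np.arange(num_thresholds) / (num_thresholds - 1.0)
    d = (distance_map - distance_map.min()) / (distance_map.max() - distance_map.min() + 1e-12)
    scores = [f1_score(gt.reshape(-1), (d > t).reshape(-1).astype(np.uint8)) for t in thresholds]
    best = int(np.argmax(scores))
    return thresholds[best], scores[best]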

Is this the newest code?

There are some errors in this code.

Such as:
import model.siameseNet.DASNET as models

and some others ...

It cannot be run directly.

About WDMC

Hi, thank you for sharing the code. May I ask which function implements the WDMC loss function?

About the WDMC loss function

I have two questions about the ContrastiveLoss1() loss function in loss.py.

  1. I think

    mdist_pos = torch.clamp(dist - self.margin1, min=0.0)
    mdist_neg = torch.clamp(self.margin2 - dist, min=0.0)

    should be

    mdist_pos = torch.clamp(self.margin1 - dist, min=0.0)
    mdist_neg = torch.clamp(dist - self.margin2, min=0.0)

  2. I think the original code forgot to add the weights mentioned in the paper, so it seems like it should be

    w1 = 1 / 0.147
    w2 = 1 / (1 - 0.147)
    loss_pos = w2 * ((1 - y) * (mdist_pos.pow(2)))  # w2 is small, so the network pays less attention to unchanged pairs
    loss_neg = w1 * (y * (mdist_neg.pow(2)))        # w1 is large, so the network focuses on changed pairs

ModuleNotFoundError: No module named 'attention'

When I run test.py I have an issue loading the module. The module is in the same directory and the code also seems correct. What is going wrong? Please help.

seNet/dares.py", line 12, in <module>
    from attention import PAM_Module
ModuleNotFoundError: No module named 'attention'
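One workaround that has helped with similar layouts (an assumption based on the model.siameseNet package name used elsewhere in this repo, not an official fix) is to put the directory that contains attention.py on sys.path before the model is imported, e.g. near the top of test.py:

import os
import sys

# Make 'from attention import PAM_Module' resolvable by adding the directory
# that contains attention.py to the module search path; the 'model/siameseNet'
# path is an assumption based on the package name used in this repo.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'model', 'siameseNet'))

Alternatively, changing the import inside the failing module to a package-relative one (from .attention import PAM_Module) achieves the same thing when the package is imported from the repository root.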

Higher accuracy on the LEVIR-CD and CDD datasets

Recently, I tested your code and replaced the loss with cross-entropy loss. In addition, I also changed a few lines at the end of Res50.py. I got good accuracy on the LEVIR-CD and CDD datasets; the F1 scores are 90.99 and 96.38, respectively. If you are interested in my work, you can contact me. My email is [email protected] .

About the BCDD dataset

Hello, how much data was BCDD split into? I only know there is an 8:1:1 split, but what is the total? If possible, I hope the authors can provide the train/val/test sets used in the experiments, or the splitting script.

How to set up the CDD dataset to get reasonable precision

Hi, I ran this work and got a precision below 8% after 10 epochs (it also includes some TN results, so the change map is useless).
I do not know why, so could you please tell me how you set up the dataset?
In my experiments, I used the pretrained model to train directly on the CDD Real subset, not on the Model (synthesized) data. Did you use the whole CDD dataset, including both Model and Real?

About the details of train/val/test split of the BCDD set

Greetings! Could you release the details of the train/val/test set of the BCDD set? For example, could you provide the train/val/test lists for each dataset? Because many people may want to make a fair comparison with your proposed method. Thank you very much.

Questions about the testing procedure

Hello, I am a beginner in change detection and came across your paper on another website; I want to use it to understand the change detection pipeline. I used your validate() as the test code, set the resume parameter to 1 and cfg.TRAINED_BEST_PERFORMANCE_CKPT = CDD_model_best.pth, and tested on the 2998 images in Real\subset\val of CDD, but the resulting f1_score is only 0.0399. Did I perhaps fail to set some parameters correctly? Thank you for your trouble.

problems about metric

Why are there 256 precision and recall results for each epoch? Are they output by rows or by columns? If we want the final precision and recall, do we need to add them up and average them? Hope to get your answer.

load CDD_model_best.pth fail


model = models.SiameseNet(norm_flag='l2')
checkpoint = torch.load(r'F:\DASNet-master\ckpt\CDD_model_best.pth', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])

The keys do not match when I run test.py with the pretrained file CDD_model_best.pth.

Train with my own data

Hello there! Thank you very much for making the code public.
I used my own dataset for model training with DASNet + DContrastiveLoss. The initial loss was 0.23; after a period of training, NaN appeared. Is DContrastiveLoss equal to the WDMC loss? Where is the weight-calculation code? In addition, what are the possible reasons for the NaN values?
Thank you!
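A general observation for anyone seeing NaN with contrastive-style losses (not a diagnosis of this specific repo): the gradient of the square root explodes when two embeddings are almost identical, so adding a small epsilon inside the square root, and optionally clipping gradients, often stabilizes training. A minimal sketch, with illustrative names:

import torch

def safe_euclidean_distance(f0, f1, eps=1e-6):
    """Per-pixel Euclidean distance between two (N, C, H, W) embeddings, with an
    epsilon inside the sqrt so the gradient stays finite when f0 == f1."""
    return torch.sqrt(((f0 - f1) ** 2).sum(dim=1) + eps)

# Optional: clip gradients each step to guard against occasional spikes.
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)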

Which loss

Thank you for your work; using your data and code I have reproduced the results reported in your paper.
In addition, may I ask which loss in loss.py is the loss designed in your paper?
