
a-deeply-supervised-image-fusion-network-for-change-detection-in-remote-sensing-images's People

Contributors

geozcx

a-deeply-supervised-image-fusion-network-for-change-detection-in-remote-sensing-images's Issues

model

The link to the model weights is dead. Could you upload them again? Thanks.

Could you provide the complete training code (Keras version)? I always get the wrong input dimension.

I stacked the two images into one array and fed it to the model. It always reports the following error:

`Input 0 of layer block1_pool is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [4, 2, 512, 512, 64]`

This is my code: `model.fit(train_img, train_label, batch_size=4, epochs=100)`

```python
import os

import cv2
import numpy as np

def train_data_loader(path1, path2):
    img1_names = sorted(os.listdir(path1))
    img2_names = sorted(os.listdir(path2))
    img1_group = [os.path.join(path1, name) for name in img1_names]
    img2_group = [os.path.join(path2, name) for name in img2_names]

    train_imgs = []
    for img1_path, img2_path in zip(img1_group, img2_group):
        img1 = np.array(cv2.imread(img1_path))
        img2 = np.array(cv2.imread(img2_path))
        # Stacking each pair as [img1, img2] is what yields a 5-D batch
        # of shape (N, 2, 512, 512, C) and triggers the ndim error above.
        train_imgs.append([img1, img2])
    return train_imgs
```

`train_img = train_data_loader(input_t1, input_t2)`
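
A hedged sketch of one possible fix, assuming the Keras DSIFN takes the two temporal images as two separate 4-D inputs (mirroring the t1_input/t2_input of the PyTorch version discussed below); `model` and `train_label` are from the snippet above:

```python
import numpy as np

# Split the stacked pairs into two 4-D batches of shape (N, 512, 512, 3)
# and pass them to the model as separate inputs.
pairs = np.array(train_img)                   # (N, 2, 512, 512, 3)
t1_batch, t2_batch = pairs[:, 0], pairs[:, 1]
model.fit([t1_batch, t2_batch], train_label, batch_size=4, epochs=100)
```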

About "Sigmoid"

In the PyTorch version of the code:

  1. Sigmoid is applied at the end of the DSIFN model, but it is applied again in your loss function.

  2. Also, in loss.py, why is the sigmoid function applied to the target?
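
For reference, a minimal sketch of the pattern this issue describes (not the repository's actual loss.py): applying sigmoid to an output that is already in (0, 1), and to a binary target, compresses both into a narrow range.

```python
import torch
import torch.nn.functional as F

pred = torch.rand(1, 1, 4, 4)                        # output already in (0, 1)
target = torch.randint(0, 2, (1, 1, 4, 4)).float()   # binary change mask

# Double sigmoid: maps (0, 1) into roughly (0.5, 0.731).
pred2 = torch.sigmoid(pred)
# Sigmoid on a 0/1 target: maps {0, 1} to {0.5, 0.731}.
target2 = torch.sigmoid(target)

loss = F.binary_cross_entropy(pred2, target2)
```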

Could the raw large images be made public?

I want to do some research on change detection for large images, but there are few public change detection datasets of large images. I see the raw large images shown in the README. Does this mean the raw images can be obtained by stitching the tiles, or will they be made public in the future?

Network parameter initialization

Hi @GeoZcx, I have found that when I initialize the network parameters, the network cannot detect any change (PyTorch version), but when I do not initialize them, the network works well. The following is the initialization code:

```python
def _initialize_weights(self):
    for m in self.modules():
        # Kaiming (He) initialization for conv and linear layers
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            if m.bias is not None:
                m.bias.data.zero_()
        # BatchNorm: unit scale, zero shift
        elif isinstance(m, nn.BatchNorm2d):
            m.weight.data.fill_(1)
            m.bias.data.zero_()
```
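
One possible cause worth checking (a hedged guess, not a confirmed fix): if this loop visits all of self.modules(), it also re-initializes the pretrained VGG16 backbone (stored as `self.features = nn.ModuleList(features).eval()` per the reproduction issue below), wiping out the pretrained features the network relies on. Skipping the backbone would look roughly like this:

```python
def _initialize_weights(self):
    # assumes the pretrained VGG16 backbone lives under self.features
    for name, m in self.named_modules():
        if name.startswith('features'):
            continue  # leave pretrained backbone weights untouched
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            if m.bias is not None:
                m.bias.data.zero_()
        elif isinstance(m, nn.BatchNorm2d):
            m.weight.data.fill_(1)
            m.bias.data.zero_()
```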

question about your dataset

The 512×512 masks in the train and val sets you released are all black. Did you train your model by upsampling the 256×256 masks, or by some other method?

Spatial_Attention in the PyTorch version

The PyTorch version has only one Spatial_Attention module, but I think five spatial attentions would be better: there are five branches, and each branch should have its own spatial attention after its channel attention. I also found that there are five spatial attention modules in the Keras version.

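A minimal sketch of this suggestion (illustrative only; SpatialAttention stands for the attention block in the PyTorch version, and the wrapper name is hypothetical):

```python
import torch.nn as nn

class BranchSpatialAttentions(nn.Module):
    """One independent spatial-attention module per decoder branch."""
    def __init__(self, attention_cls, num_branches=5):
        super().__init__()
        self.atts = nn.ModuleList(attention_cls() for _ in range(num_branches))

    def forward(self, branch_feats):
        # branch_feats: list of five feature maps, one per branch;
        # each attention map re-weights its own branch's features.
        return [att(f) * f for att, f in zip(self.atts, branch_feats)]
```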

Channel Attention

Thanks for your new idea. There is a discrepancy between the PyTorch implementation of the channel attention and the explanation in the paper.

Question about the result weight files

Hello, I am very interested in your paper. Could you re-upload the weight files? The link appears to be dead.

The loss function in the PyTorch version

I found that the loss function cannot handle the tuple returned by your model; the PyTorch version's code may be wrong.

The loss may need a form like the following:

```python
loss = 0.0
for out in model_outputs:  # iterate over the tuple of branch outputs
    loss += F.binary_cross_entropy(out, target)
```
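
For reference, a hedged sketch of a deep-supervision loss over the returned tuple (function and argument names are hypothetical); each branch target may also need resizing, since branch outputs can be smaller than the label (see the output-size issue below):

```python
import torch.nn.functional as F

def deep_supervision_loss(outputs, target):
    loss = 0.0
    for out in outputs:
        # resize the binary target to this branch's spatial size
        t = F.interpolate(target, size=out.shape[2:], mode='nearest')
        loss = loss + F.binary_cross_entropy(out, t)
    return loss
```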

question about training process

Hi, I am confused about your training process. As I understand it, you load ImageNet-pretrained weights into VGG16, then freeze the VGG16 parameters and train only the DDN part. Is that right? According to the paper, the deep feature extraction network (DFEN) and the difference discrimination network (DDN) are trained independently, but if you only load ImageNet-pretrained weights into VGG16, you never train VGG16 itself. In that case I also question how much the pretrained weights help.
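
For reference, a minimal sketch of that freezing pattern, assuming a torchvision VGG16 backbone:

```python
from torchvision.models import vgg16

# .eval() only changes layer behavior (e.g. dropout/batch-norm);
# setting requires_grad=False is what actually freezes the weights.
backbone = vgg16(pretrained=True).features.eval()
for p in backbone.parameters():
    p.requires_grad = False  # only the DDN parameters remain trainable
```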

About the network's output

Sorry, my English is not very good. Regarding the PyTorch version of the code, I am confused about one thing and need your help. Why is the convolution output of the last layer of the network single-channel? In remote sensing change detection there are generally two classes, change and non-change. If the output is only single-channel, how do we get the change results?
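
For reference, a common convention (not necessarily what this repository does): a single sigmoid channel encodes the probability of change, and the binary map comes from thresholding at 0.5:

```python
import torch

logits = torch.randn(1, 1, 256, 256)  # stand-in for the single-channel output
prob = torch.sigmoid(logits)          # change probability in (0, 1)
change_map = (prob > 0.5).float()     # 1 = change, 0 = no change
```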

cannot reproduce the result of DSIFN on the first dataset described in the paper.

During training, the VGG backbone is frozen with pretrained weights via `self.features = nn.ModuleList(features).eval()`, so that only the weight parameters of the DDN are updated.

Environment:

Ubuntu 16.04
Python==3.6.8
PyTorch==1.4
an RTX 2080 Ti GPU

Hyperparameter settings:

batch_size = 8
initial learning rate = 0.0001
Adam optimizer

Would you mind releasing the complete training code? Thanks.
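
For concreteness, the reported setup corresponds roughly to the following sketch (`model` is hypothetical; only unfrozen DDN parameters reach the optimizer):

```python
import torch

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4,  # initial learning rate = 0.0001
)
# batch_size = 8 is configured on the DataLoader.
```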

How to use the PyTorch version

Can you share a demo or a model usage example? I don't know how to use this code; I have tried but failed and ended up with undesired output. Please help.
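
A hedged minimal inference sketch, assuming the model takes the two temporal tensors separately (as the t1_input/t2_input in other issues here suggest) and returns a tuple of deeply supervised outputs; the tensor names and the choice of the last element are assumptions:

```python
import torch

model.eval()
with torch.no_grad():
    t1 = torch.rand(1, 3, 512, 512)  # bitemporal image pair
    t2 = torch.rand(1, 3, 512, 512)
    outputs = model(t1, t2)          # tuple of branch outputs
    change_prob = outputs[-1]        # assumed full-resolution branch
```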

Problem about the output and the loss function

Excuse me, I have a problem with the loss: the output of a branch is smaller than the label (e.g. the branch output is 28×28 but the label is 256×256), so how should the label be processed to fit the branch output size? I want to use downsampling.
Looking forward to your reply!
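
A hedged sketch of both options (`label` and `branch_out` are hypothetical tensors): downsample the label with mode='nearest' to keep it binary, or upsample the branch output to the label size:

```python
import torch.nn.functional as F

label_28 = F.interpolate(label, size=(28, 28), mode='nearest')
pred_256 = F.interpolate(branch_out, size=(256, 256),
                         mode='bilinear', align_corners=False)
```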

How can I obtain the complete code?

Hello author. In the publicly released DSIFN script, I don't quite understand what model_A and model_B are used for. Is there any way to obtain the complete code? Many thanks.

question about input channels

When defining a model, it generally takes a form like `class Unet(nn.Module): def __init__(self, input_nbr, label_nbr):`. Why is there no choice of input and output channels in your PyTorch code?
In your code, `class DSIFN(nn.Module): def __init__(self, model_A, model_B):` — what do model_A and model_B represent? vgg16_base?
Looking forward to your answer; it will be very helpful to me. Thank you!

Regarding training code

Hello, I recently came across your paper, but I am unable to find the training code for your model. Could you please share the Keras training code, either here or in the Google Drive link for the dataset?
Thank you!

Problem about training the network independently in two steps

Hello, I am very interested in your paper, but I have some problems understanding your paper and code. The paper says the change detection task is divided into two stages: first, the bitemporal images are fed to the pretrained DFEN (two VGG networks) to generate deep features, and then the extracted deep features are input to the DDN to discriminate the changed areas. What do you mean by saying these two parts should be trained independently? Does it mean using the feature-extraction style of transfer learning, i.e. extracting features and feeding them directly into the DDN for training? You did not use fine-tuning, right? Hope to hear from you, thank you very much!

question about DSIFN

Hello, what are the roles of model_A and model_B, where are they defined, and what should be passed in? What do they do to t1_input and t2_input? Thanks.

About Evaluation Metrics

Thanks for your work.
You used the CDD dataset in the paper and achieved very good results on its test set. When computing evaluation metrics such as F1-score, should one compute F1 for each 256×256 test image and then average, or build a confusion matrix over the whole test set and compute F1 from that?
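
For clarity, a sketch contrasting the two protocols (`pairs` is a hypothetical list of (pred, gt) binary arrays):

```python
import numpy as np

def counts(pred, gt):
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    return tp, fp, fn

def f1(tp, fp, fn, eps=1e-12):
    p = tp / (tp + fp + eps)
    r = tp / (tp + fn + eps)
    return 2 * p * r / (p + r + eps)

# Protocol A: per-image F1, then mean over the test set.
mean_f1 = np.mean([f1(*counts(pred, gt)) for pred, gt in pairs])

# Protocol B: one confusion matrix over the whole test set.
TP = FP = FN = 0
for pred, gt in pairs:
    tp, fp, fn = counts(pred, gt)
    TP, FP, FN = TP + tp, FP + fp, FN + fn
dataset_f1 = f1(TP, FP, FN)
```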
