
2016_person_re-id's People

Contributors

joyluo, layumi, tannd-ds


2016_person_re-id's Issues

some problem about softmax output

Thanks for sharing the wonderful code! This is not an issue with the code itself, just a question of my own.
For positive pairs, the final two-dimensional FC layer of the network outputs something like a [0.57, 0.43] vector (I added a softmax layer after the original final FC layer). Although the network can still distinguish positive from negative pairs (positive pairs mostly score above 0.5, negatives the opposite), for my own project it would be better if positives scored close to 1 (such as 0.95).
So why does the final two-dimensional FC layer of this code output something like [0.57, 0.43] rather than [0.95, 0.05], as a softmax layer usually does?
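For reference, one way to get sharper scores without retraining (a sketch, not part of the repository) is to divide the FC logits by a temperature T < 1 before the softmax; this leaves the ranking of pairs unchanged while pushing the probabilities towards 0 and 1:

% sketch: temperature-scaled softmax on a two-dimensional logit vector
logits = [0.14, -0.14];                         % logits that softmax to about [0.57, 0.43]
T = 0.1;                                        % temperature < 1 sharpens the output
p = exp(logits ./ T) ./ sum(exp(logits ./ T));  % gives roughly [0.94, 0.06]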

the way of training the ResNet-50 baseline

hi @layumi
I have read your paper and code carefully. The ResNet-50 baseline in your paper is higher than the others, so my question is how the baseline was trained. Is it ResNet-50 fine-tuned on Market-1501 with the softmax loss?
And is "ours" ResNet-50 fine-tuned on Market-1501 with the siamese loss (the two losses)?
The last question: do the ResNet-50 baseline and "ours" use the same training solver?

Thanks for your kindness,
Luo Ze

could junk images affect the rank-1?

Dear layumi:
I am very interested in person re-ID, and I have read the paper and code from your team. When I evaluate the results of the net, the njunk term improves the rank-1 result; I think the real rank-1 should be cmc(n:end) = 1; rather than cmc(n-njunk:end) = 1;.
I am looking forward to your reply.
The code is from 2016_person_re-ID/evaluation/compute_AP_rerank.m:
if ~isempty(find(good_image == index(n), 1))
    cmc(n-njunk:end) = 1;
    flag = 1;          % good image
    good_now = good_now + 1;
end

about training the cuhk03 dataset

Hi, I am very sorry to bother you again. I changed the line

fc751Block = dagnn.Conv('size',[1 1 2048 751],'hasBias',true,'stride',[1,1],'pad',[0,0,0,0]);

to

fc751Block = dagnn.Conv('size',[1 1 2048 1367],'hasBias',true,'stride',[1,1],'pad',[0,0,0,0]);

and made no other changes anywhere else. Then I ran resnet52_2stream.m to recreate net.mat, and after that I ran train_id_net_res_2stream.m, but it raises this error:
Error using gpuArray/subsref
Index exceeds matrix dimension.

Error in vl_nnloss (line 234)
    t = Xmax + log(sum(ex,3)) - x(ci) ;

Error in dagnn.Loss/forward (line 14)
    outputs{1} = vl_nnloss(inputs{1}, inputs{2}, [], 'loss', obj.loss, obj.opts{:}) ;

Error in dagnn.Layer/forwardAdvanced (line 85)
    outputs = obj.forward(inputs, {net.params(par).value}) ;

Error in dagnn.DagNN/eval (line 91)
    obj.layers(l).block.forwardAdvanced(obj.layers(l)) ;

Error in cnn_train_dag>processEpoch (line 222)
    net.eval(inputs, params.derOutputs, 'holdOn', s < params.numSubBatches) ;

Error in cnn_train_dag (line 90)
    [net, state] = processEpoch(net, state, params, 'train',opts) ;

Error in train_id_net_res_2stream (line 34)
    [net,info] = cnn_train_dag(net, imdb, @getBatch,opts) ;
Can you tell me what I am doing wrong? Thank you very much!
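For reference, vl_nnloss indexes its input along the class dimension with the labels, so this error usually means some label in the imdb exceeds the 1367 outputs of the new FC layer. A quick sanity check (a sketch; the imdb path and field names are assumptions) is:

% sketch: confirm every label fits inside the 1..1367 outputs of the new FC layer
imdb = load('data/cuhk03_imdb.mat');             % path/file name assumed, use your own imdb
if isfield(imdb, 'imdb'), imdb = imdb.imdb; end  % unwrap if the struct was saved whole
fprintf('max label = %d (should be <= 1367)\n', max(imdb.images.label(:)));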

about the cuhk03 evaluation

Thanks for your evaluation code, but I am a little confused about one line of it. Could you tell me what 'ff = ff.ff1' and 'ff = ff.ff2' mean? I cannot find where 'ff' and 'ff1' are defined.

Use 2016_person_re-ID to train DukeMTMC

Hi, Zhedong Zheng,
I used the master branch to train on the DukeMTMC data, but the mAP and the rank-1 are too low. I only changed the image paths in faster.m and crazy.m.

Attempt to execute SCRIPT vl_nnconv as a function:

Hello,

I am using gcc 4.7 and g++ 4.7 with MATLAB R2016a, which are compatible for building the MEX files.
I used the vl_compilenn command to compile the files in the matlab folder and build the MEX files.
The MEX files now exist in the matlab/mex folder.

I started MATLAB in the test folder and used this command to add the subdirectories and MEX files:

addpath(genpath('../matlab'))

But when I try to run the test_gallery_query_crazy.m file, it stops with this error:

Attempt to execute SCRIPT vl_nnconv as a function:

What is the source of this error?

Thanks in advance.
Sarah
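For reference, this message usually means MATLAB found the documentation-only matlab/vl_nnconv.m before the compiled MEX file, i.e. matlab/mex is not on the path. A possible fix, sketched under the assumption that MATLAB is started from the test folder, is to run MatConvNet's setup script instead of (or before) the plain addpath:

% sketch: let vl_setupnn put both matlab/ and matlab/mex/ on the path
run ../matlab/vl_setupnn.m      % relative path assumed; adjust to your layout
which vl_nnconv                 % should now point at matlab/mex/vl_nnconv.mex*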

the ratio of classification loss and verification loss is 1/2

Dear @layumi,
When I train the siamese net, the verification loss fluctuates wildly, and the way it decreases is very hard to understand.
(1) What is your view on how the verification loss should change (decrease)?
(2) Have you tuned the net with other ratios of the two losses, and could you tell me the results?
Below are the figures of the classification loss and the verification loss.
(figures: classification loss curve, verification loss curve)
Thanks for your kindness.

Use same data for training and validation?

Because rand_same_class and rand_diff_class apply the same filter on the images' set attribute in both training and validation, the model is validated on almost the same data it was trained on, which leads to unreliable accuracy.

Detail:

In cnn_train_dag.m, lines 98~99:

[net, state] = processEpoch(net, state, params, 'train',opts) ;
[net, state] = processEpoch(net, state, params, 'val',opts) ;

in each epoch,

  • function processEpoch does training with opts containing the data indexes whose set == 1;
  • and does validation with opts containing the data indexes whose set == 2;
  • and does the evaluation in lines 219~226:
if strcmp(mode, 'train')
    net.mode = 'normal' ;
    net.accumulateParamDers = (s ~= 1) ;
    net.eval(inputs, params.derOutputs, 'holdOn', s < params.numSubBatches) ;
else
    net.mode = 'test' ;
    net.eval(inputs) ;
end

The key point is that in processEpoch, line 206:

inputs = params.getBatch(params.imdb, batch,opts) ;

both training and validation use function getBatch to generate the input data.

and in function getBatch in train_id_net_res_2stream.m, lines 51~57:

for i=1:batchsize
    if(i<=half)
        batch2(i) = rand_same_class(imdb, batch(i));
    else
        batch2(i) = rand_diff_class(imdb, batch(i));
    end
end

and in rand_same_class.m line 5~8

while(output==index || imdb.images.set(output)~=1)
        selected = randi(numel(list));
        output = list(selected);
end

which filters out images whose set is 2, meaning that rand_same_class (and likewise rand_diff_class) never returns validation (set == 2) images, even when it is called during validation.

Summary

Because rand_same_class and rand_diff_class apply the same set filter in training and validation, the model validates with almost the same data it was trained on, which leads to unreliable accuracy.

Fix Advice

Add a parameter indicating the eval mode to rand_same_class and rand_diff_class, and filter out images whose set is 2 during training and images whose set is 1 during validation. If you confirm this bug, I can post a pull request to fix it.
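A minimal sketch of that change, assuming rand_same_class picks another image of the same identity through imdb.images.label (the helper's exact internals may differ):

function output = rand_same_class(imdb, index, setid)
% pick another image with the same identity as 'index', restricted to the
% requested set: setid == 1 for training, setid == 2 for validation
if nargin < 3
    setid = 1;                              % default keeps the original behaviour
end
label  = imdb.images.label(index);          % field name assumed from the imdb layout
list   = find(imdb.images.label == label);
output = index;
while(output == index || imdb.images.set(output) ~= setid)
    selected = randi(numel(list));
    output   = list(selected);
end
end

rand_diff_class would take the same extra argument, and getBatch would pass 1 or 2 through depending on whether processEpoch is in 'train' or 'val' mode.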

Undefined function or variable 'net'

@layumi Hi, I tried to run your code, but there is a mistake. When I run train_id_net_res_2stream.m, it shows:

cnn_train_dag: resuming by loading epoch 75
Undefined function or variable 'net'.

Error in cnn_train_dag>loadState (line 406)
    net = dagnn.DagNN.loadobj(net) ;

Error in cnn_train_dag (line 67)
    [net, state, stats] = loadState(modelPath(start)) ;

Error in train_id_net_res_2stream (line 34)
    [net,info] = cnn_train_dag(net, imdb, @getBatch,opts) ;

Can you tell me what is wrong with it? I have been trying to solve this problem for several days without success. Thank you!
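For reference, loadState simply loads the variables net, state and stats from the epoch-75 checkpoint, so this error usually means that .mat file is incomplete or corrupt (for example, training was interrupted while saving). A quick check, sketched with an assumed expDir, is:

% sketch: inspect the checkpoint cnn_train_dag is resuming from; if 'net' is
% missing, delete the file so training resumes from an earlier epoch instead
info = whos('-file', 'data/res52_2stream/net-epoch-75.mat');   % path assumed
disp({info.name})                                              % should list 'net', 'state', 'stats'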

There is an error when I am running train_id_net_res_2stream

When I run train_id_net_res_2stream, there is an error:
"The subscript index must be a positive integer type or a logical type.
Error in train_id_net_res_2stream (line 13)
net.params(net.getParamIndex('fc751f')).learningRate = 0.01;"
My environment is Windows 10 + MATLAB 2016.
Can you tell me where the problem is? Thank you.
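For reference, this error usually means getParamIndex could not find a parameter named 'fc751f' in the network that was loaded, so the returned index is not a valid subscript. A quick way to see which parameter names actually exist (a sketch; the path to net.mat is assumed) is:

% sketch: list the parameter names in the generated network
netStruct = load('data/net.mat');               % path assumed
net = dagnn.DagNN.loadobj(netStruct.net);
disp({net.params.name}');                       % check whether 'fc751f' appears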

how to get the result of single query and multi query

hi @layumi
The result of your code is based on single query; single query in your code means that every image in the query dataset is used to get the search result in the gallery dataset.
1. Am I right about single query?
2. How can I get the multi-query result?

Thanks for your kindness.
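For reference, the usual Market-1501 multiple-query protocol pools the features of all query images that share the same identity and camera into a single query feature and then ranks the gallery exactly as in single query. A rough sketch (variable names are assumptions, not the repository's script):

% sketch: average-pool the query features of one identity/camera, then rank
k         = 1;                                  % index of the current query image
mask      = (queryID == queryID(k)) & (queryCam == queryCam(k));
mq_feat   = mean(queryFeat(:, mask), 2);        % max(queryFeat(:,mask),[],2) for max-pooling
score     = galleryFeat' * mq_feat;             % same similarity as in single query
[~, rank] = sort(score, 'descend');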

About the shared weights?

Hi, @layumi
Thanks a lot for your code.
There is a question that puzzles me: do the two ResNet branches in your code share the same weights? I see that you initialize the parameters from the pretrained model and change the learning rate of the parameters in each branch, but I have not found where the two branches are made to share the same weights.
Can you help me with that?
Thanks!
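For reference, in MatConvNet's DagNN wrapper two layers share weights when they are added with the same parameter names, so a single copy of the filters receives gradients from both streams. A minimal sketch (layer, variable and parameter names here are illustrative, not taken from the repository):

% sketch: both branches reference the same parameters 'convA_f' / 'convA_b'
net = dagnn.DagNN();
convBlock = dagnn.Conv('size',[3 3 64 64],'hasBias',true,'stride',[1,1],'pad',[1,1,1,1]);
net.addLayer('convA_stream1', convBlock, {'x1'}, {'y1'}, {'convA_f','convA_b'});
net.addLayer('convA_stream2', convBlock, {'x2'}, {'y2'}, {'convA_f','convA_b'});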

The variable "dagnn" or class "dagnn.DagNN.loadobj" is not defined.

@layumi My configuration is CPU + MATLAB 2016a + VS2015 + Windows 10.
First, I ran gpu_compile to compile MatConvNet, and it reported that the compilation was successful. Then I wanted to run test_gallery_query_crazy, but there was an error:

The variable "dagnn" or class "dagnn.DagNN.loadobj" is not defined.
Error in test_gallery_query_crazy (line 8)
    net = dagnn.DagNN.loadobj(netStruct.net);

I used the method suggested on the internet (adding run /matlab/vl_setupnn.m), but I still get the same error.

some issues about Oxford5k dataset and train order

Hello, thank you for your paper and code.
I have some questions for you:
1. The paper mentions the Oxford5k dataset, but I do not know how to get it. Does the experiment actually use it?
2. Regarding the experiment on GitHub: after Installation and Dataset comes the Test section. Is the training step omitted?
I am looking forward to your reply, thank you!

how to make the activation maps

Dear @layumi,
The visualization of the activation maps in your paper is wonderful, and I am very curious how to produce it. I found no answer after searching the internet, so please share the technique when you have some spare time.
Thanks a lot.
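For reference, a common way to produce such maps (a rough sketch, not the authors' script; the variable name 'res5cx' and the preprocessing are assumptions) is to run the image through the network, keep a late convolutional feature map, sum its squared activations over the channels, and upsample the result to the image size:

% sketch: per-location activation energy of a deep feature map
net.mode = 'test';
vidx = net.getVarIndex('res5cx');              % variable name assumed
net.vars(vidx).precious = true;                % keep the intermediate activation
net.eval({'data', im_});                       % im_ = preprocessed input image
fmap = gather(net.vars(vidx).value);           % H x W x C feature map
amap = sum(fmap.^2, 3);                        % energy per spatial location
amap = imresize(mat2gray(amap), [size(im_,1) size(im_,2)]);
imagesc(amap); axis image off; colormap jet;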

Visualization

Hi, how can I obtain the visualization of the classification structure shown in your paper "A Discriminatively Learned CNN Embedding for Person Re-identification"? Have you written a Python version of it?
(image attached)

Load data error?

Environment: CentOS 7 / MATLAB 2014.
When I run test2/test_gallery_res.m, it reports: data/imagenet-resnet-50-dag.mat Not a binary MAT-file, try load -ASCII.
When I run train_id_net_res_2stream.m, the same problem occurs.
I do not know why; any hints would be appreciated.
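For reference, this message usually means the file on disk is not a valid MAT-file at all, most often because the download of imagenet-resnet-50-dag.mat was interrupted or returned an HTML page instead. A quick check (a sketch) is:

% sketch: a real v5/v7.3 MAT-file starts with the text 'MATLAB'
fid = fopen('data/imagenet-resnet-50-dag.mat', 'r');
header = fread(fid, [1 6], '*char');
fclose(fid);
disp(header)                    % anything other than 'MATLAB' means re-download the model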

run zzd_evaluation_res_faster.m and got NULL

@layumi
Hello,
Thank you for the code, but when I run zzd_evaluation_res_faster.m I cannot get the ranks and I get many NULL values.
I checked the code and found that it compares the query and test images of the Market-1501 dataset,
but the query and test images are different.
Could you please tell me what is wrong?
Thank you very much!
