
person_search's Introduction

Person Search Project

This repository hosts the code for our paper Joint Detection and Identification Feature Learning for Person Search. The code is modified from py-faster-rcnn written by Ross Girshick.

Installation

  1. Clone this repo recursively
git clone --recursive https://github.com/ShuangLI59/person_search.git
  2. Build Caffe with python layers and interface

We modified Caffe based on Yuanjun's fork, which supports multi-GPU training and memory optimization.

Apart from the official installation prerequisites, we have several other dependencies:

  • cudnn-v5.1
  • 1.7.4 < openmpi < 2.0.0
  • boost >= 1.55 (A tip for Ubuntu 14.04: sudo apt-get autoremove libboost1.54* then sudo apt-get install libboost1.55-all-dev)

Then compile and install Caffe with:

cd caffe
mkdir build && cd build
cmake .. -DUSE_MPI=ON -DCUDNN_INCLUDE=/path/to/cudnn/include -DCUDNN_LIBRARY=/path/to/cudnn/lib64/libcudnn.so
make -j8 && make install
cd ../..

Please refer to this page for detailed installation instructions and troubleshooting.

  3. Build the Cython modules

Install some Python packages you might not have: Cython, python-opencv, easydict (>=1.6), PyYAML, protobuf, mpi4py. Then

cd lib && make && cd ..
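
If the build or the scripts later fail with import errors, it is usually one of the packages above that is missing. The following optional snippet (not part of this repo) simply checks that each dependency is importable from Python:

import importlib

# Module names for the packages listed above: cv2 comes from python-opencv,
# yaml from PyYAML, and google.protobuf from the protobuf package.
for module in ["Cython", "cv2", "easydict", "yaml", "google.protobuf", "mpi4py"]:
    try:
        importlib.import_module(module)
        print("OK      " + module)
    except ImportError as exc:
        print("MISSING " + module + " (" + str(exc) + ")")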

Demo

Download our trained model to output/psdb_train/resnet50/, then

python2 tools/demo.py --gpu 0

Or you can use CPU only by setting --gpu -1.
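
Under the hood, the demo detects people in the gallery images, extracts an L2-normalized identity feature for each detection, and ranks the detections by their similarity to the query person's feature. The repo does all of this for you; the NumPy sketch below only illustrates the ranking step on made-up arrays (it is not an API of this project, and the 256-d feature size is illustrative):

import numpy as np

# Toy stand-ins for the features the network would produce.
query_feat = np.random.randn(256)
gallery_feats = np.random.randn(1000, 256)  # 1000 gallery detections

# L2-normalize so that a dot product equals cosine similarity.
query_feat /= np.linalg.norm(query_feat)
gallery_feats /= np.linalg.norm(gallery_feats, axis=1, keepdims=True)

# Rank gallery detections by similarity to the query, highest first.
similarities = gallery_feats.dot(query_feat)
ranking = np.argsort(-similarities)
print("top-5 gallery detections: %s" % ranking[:5])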


Experiments

  1. Request the dataset from shuang.li[at]utoronto(dot)ca or tong.xiao.work[at]gmail.com (academic only). Then
experiments/scripts/prepare_data.sh /path/to/the/downloaded/dataset.zip
  2. Download an ImageNet pretrained ResNet-50 model to data/imagenet_models.

  3. Training with GPU=0

experiments/scripts/train.sh 0 --set EXP_DIR resnet50

It will finish in around 18 hours, or you may directly download a trained model to output/psdb_train/resnet50/

  4. Evaluation

    By default we use 8 GPUs for faster evaluation. Please adjust experiments/scripts/eval_test.sh to match your hardware. For example, to use only one GPU, remove the mpirun -n 8 in L14 and change L16 to --gpu 0.

    experiments/scripts/eval_test.sh resnet50 50000 resnet50

    The result should be around the following (a simplified sketch of how these metrics are computed appears after this list):

    search ranking:
      mAP = 75.47%
      top- 1 = 78.62%
      top- 5 = 90.24%
      top-10 = 92.38%
  5. Visualization

    The evaluation will also produce a json file output/psdb_test/resnet50/resnet50_iter_50000/results.json for visualization. Just copy it to vis/ and run python2 -m SimpleHTTPServer. Then open a browser and go to http://localhost:8000/vis.

    (Screenshot: visualization webpage)
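
The search metrics above are computed per query: the gallery detections are ranked by feature similarity, a detection counts as correct if it corresponds to the query person (the real protocol also requires sufficient IoU with the ground-truth box and weights AP by detection recall), top-k is the fraction of queries with a correct match among the first k results, and mAP averages the per-query average precision. A simplified sketch under these assumptions, not the repo's exact evaluation code:

import numpy as np

def topk_accuracy(matches, k):
    # matches[i] is a ranked boolean list for query i:
    # True where the detection at that rank is the query person.
    return np.mean([any(m[:k]) for m in matches])

def mean_ap(matches):
    # Average precision per query, then averaged over queries.
    aps = []
    for m in matches:
        m = np.asarray(m, dtype=float)
        if m.sum() == 0:
            aps.append(0.0)
            continue
        precision_at_rank = np.cumsum(m) / (np.arange(len(m)) + 1)
        aps.append((precision_at_rank * m).sum() / m.sum())
    return np.mean(aps)

# Example: two queries with ranked match indicators.
matches = [[False, True, False], [True, False, False]]
print("top-1 = %.2f, mAP = %.2f" % (topk_accuracy(matches, 1), mean_ap(matches)))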

Citation

@inproceedings{xiaoli2017joint,
  title={Joint Detection and Identification Feature Learning for Person Search},
  author={Xiao, Tong and Li, Shuang and Wang, Bochao and Lin, Liang and Wang, Xiaogang},
  booktitle={CVPR},
  year={2017}
}

Repo History

The first version of our paper was published in 2016. We have made substantial improvements since then and published a new version of the paper in 2017. The original code was moved to branch v1 and the new code has been merged into master. If you have checked out our code before, please be careful about this; we recommend cloning recursively into a new directory instead.

person_search's People

Contributors

cysu, ningzhou, shuangli59


person_search's Issues

faster_rcnn_test.pt is not found

Hi, I tried to run demo.py but it says faster_rcnn_test.pt is not found. May I know where I can get this file? The pascal_voc directory does not exist in person_search/models/.

What is the number of training images inside train.mat file?

Hi, I wrote a MATLAB program to convert your train.mat file into XML format for training another CNN model. The output of my program shows 9616 training images, but your paper mentions 11206 training images. Can you provide the list of training images (something like pool.mat) so that I can debug my MATLAB program more easily?

Improve the recognition efficiency

It takes about 0.8 seconds per image. What can I do to improve the recognition efficiency, other than increasing the number of GPUs?

ValueError: Array contains NaN or infinity.

When I run the following command using the downloaded models:
python2 tools/test_net.py --gpu 0 \
  --gallery_def models/psdb/VGG16/test_gallery.prototxt \
  --probe_def models/psdb/VGG16/test_probe.prototxt \
  --net output/psdb_train/VGG16_iter_100000.caffemodel \
  --cfg experiments/cfgs/train.yml \
  --imdb psdb_test

It shows:

/usr/lib/python2.7/dist-packages/sklearn/metrics/metrics.py:599: RuntimeWarning: invalid value encountered in true_divide
recall = tps / tps[-1]
Traceback (most recent call last):
File "/home/duan/person_search-master/tools/test_net.py", line 218, in
evaluate(protoc, imdb.image_index, output_dir, args.gallery_size == 0)
File "/home/duan/person_search-master/tools/test_net.py", line 111, in evaluate
ap = average_precision_score(y_true, y_score) * recall_rate
File "/usr/lib/python2.7/dist-packages/sklearn/metrics/metrics.py", line 314, in average_precision_score
return auc(recall, precision)
File "/usr/lib/python2.7/dist-packages/sklearn/metrics/metrics.py", line 172, in auc
x, y = check_arrays(x, y)
File "/usr/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 233, in check_arrays
_assert_all_finite(array)
File "/usr/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 27, in _assert_all_finite
raise ValueError("Array contains NaN or infinity.")
ValueError: Array contains NaN or infinity.

thanks~

Have trouble building Caffe in the project

Thank you for your work! I want to try this demo on a server, but I cannot build Caffe.
I encountered an error at the "make -j8 && make install" step, and the error information is:
../lib/libcaffe.so: undefined reference to google::protobuf::io::CodedInputStream::ReadTagFallback(unsigned int)
../lib/libcaffe.so: undefined reference to google::protobuf::internal::MergeFromFail(char const*, int)
../lib/libcaffe.so: undefined reference to google::protobuf::internal::RepeatedPtrFieldBase::InternalExtend(int)
../lib/libcaffe.so: undefined reference to google::protobuf::internal::ArenaStringPtr::AssignWithDefault(std::string const*, google::protobuf::internal::ArenaStringPtr)
collect2: error: ld returned 1 exit status
make[2]: *** [examples/cpp_classification/classification] Error 1
make[1]: *** [examples/CMakeFiles/classification.dir/all] Error 2
make: *** [all] Error 2
Hope you can provide some help. Thank you!

the mAP = 69.64% and top-1 = 73.17%

I got the "VGG16_iter_100000.caffemodel" caffemodel from Google Drive, and the mAP and top-1 I get are different from your paper. Did you propose a new method? Thanks.

stuck at the make process

Hello Xiao
I ran into a problem when I run make -j8 && make install in the terminal; it got stuck and showed the following errors:
.../person_search/caffe/src/caffe/data_transformer.cpp:438:51: error: 'CV_INTER_NN' was not declared in this scope
.../person_search/caffe/src/caffe/data_transformer.cpp: In member function 'void caffe::DataTransformer<Dtype>::Transform(const cv::Mat&, caffe::Blob<Dtype>*)':
.../data_transformer.cpp:681:84: error: 'CV_INTER_CUBIC' was not declared in this scope
make[2]: *** [src/caffe/CMakeFiles/caffe.dir/data_transformer.cpp.o] Error 1
make[2]: *** waiting for unfinished jobs....
make[2]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
make: ***[all] Error 2
I don't know if it's because there are two copies of Caffe on the server I use, but this Caffe is placed under the 'person_search' directory. I have installed cudnn v5.1 and openmpi 1.8.8, and I didn't modify Caffe after downloading it. Or is it because of some packages I haven't installed?

The classification loss stalls when your paper's dataset is used to train another model

Dear Tong Xiao:
Thank you for your dataset.
When I use the dataset to train another net, the classification loss does not decrease normally.
Since you are familiar with the dataset: the paper "End-to-End Deep Learning for Person Search" mentions that "When combined together, these two issues make it extremely difficult for the net to receive proper gradients. In practice, we found that the training loss would not decrease if we directly finetune the network from the ImageNet pre-trained VGG16 model."
Could you give me some ideas if you have time? Thanks very much.

I'm in awe

So this was also written by you, the master...

Ask for help

In experiments/cfgs/resnet50.yml, may I know what the difference is between IMS_PER_BATCH and BATCH_SIZE?

IDNet

Hi @Cysu and @ShuangLI59 ,

I find your work "End-to-End Deep Learning for Person Search" very interesting and am trying to reproduce the results with both the joint and Deep+IDNet settings. However, I have some questions about IDNet:

  1. For identity classification, does IDNet use the cropped pedestrian image (resized to a square?) as input, or the whole image (with the ground-truth bbox marking the cropped region)?
  2. In IDNet's output, is the number of classes 5532 (ids) or 5533 (ids + background)?
  3. Training IDNet seems hard (there are only a few samples per person id); does it also need the pretraining method and RSS layer mentioned in the paper?

Many thanks for your elaboration, I really appreciate it.

Regards,
Dong Li

thrust/device_vector.h file is missing

May I know which directory the "thrust/device_vector.h" file is located in? I searched through the whole project but cannot find it.
The "person_search/caffe-fast-rcnn/src/caffe/layers/cudnn_softmax_layer.cpp" file fails to compile the #include "thrust/device_vector.h" when I try to make pycaffe.


undefined symbol: sqlite3_column_table_name

When training the model, I met the problem below:

  from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
ImportError: /usr/lib/libgdal.so.1: undefined symbol: sqlite3_column_table_name

But py-faster-rcnn can run properly on my device.

A question about the training net

Q1.
In the roi-data layer there is {param_str: "'num_classes': 2\n'bg_aux_label': 5532"},
but in lib/roi_data_layer/layer.py I cannot find any reference to bg_aux_label. I want to know where the bg_aux_label param is used.
Q2.
compared with faster rcnn code, the
layer {
name: "silence"
type: "Silence"
bottom: "labels"
}
is added, so I want to know the function of this layer in the person_search project.

thanks for your explanation

dataset

I have sent an email to Tong Xiao, but I did not receive a reply. Can someone send me the dataset?

person search project failed to compile caffe with cudnn

I ran into an issue compiling this Caffe branch with cudnn. As mentioned in the person_search wiki, it uses cudnn-5.1, but compiling this Caffe branch failed with the cudnn 5.1 builds for both CUDA 8.0 and CUDA 7.5. What is the exact cudnn version for compiling this Caffe branch?

Below is part of the error message:
CXX .build_release/src/caffe/proto/caffe.pb.cc
CXX src/caffe/blob.cpp
CXX src/caffe/common.cpp
CXX src/caffe/data_transformer.cpp
CXX src/caffe/layer_factory.cpp
CXX src/caffe/layers/absval_layer.cpp
CXX src/caffe/layers/accuracy_layer.cpp
CXX src/caffe/layers/argmax_layer.cpp
CXX src/caffe/layers/base_conv_layer.cpp
CXX src/caffe/layers/base_data_layer.cpp
CXX src/caffe/layers/batch_reduction_layer.cpp
CXX src/caffe/layers/bias_layer.cpp
CXX src/caffe/layers/bn_layer.cpp
CXX src/caffe/layers/bnll_layer.cpp
CXX src/caffe/layers/concat_layer.cpp
CXX src/caffe/layers/contrastive_loss_layer.cpp
CXX src/caffe/layers/cudnn_bn_layer.cpp
CXX src/caffe/layers/conv_layer.cpp
CXX src/caffe/layers/cudnn_conv_layer.cpp
CXX src/caffe/layers/cudnn_pooling_layer.cpp
src/caffe/common.cpp:146:1: error: ‘Caffe’ does not name a type
Caffe::Caffe()
^
src/caffe/common.cpp:177:1: error: ‘Caffe’ does not name a type
Caffe::~Caffe() {
^
src/caffe/common.cpp:184:6: error: ‘Caffe’ has not been declared
void Caffe::set_random_seed(const unsigned int seed) {
^
src/caffe/common.cpp: In function ‘void set_random_seed(unsigned int)’:
src/caffe/common.cpp:187:11: error: ‘Get’ was not declared in this scope
if (Get().curand_generator_) {
^
In file included from ./include/caffe/common.hpp:34:0,
from src/caffe/common.cpp:5:
src/caffe/common.cpp:188:70: error: ‘curand_generator’ was not declared in this scope
CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(curand_generator(),
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro ‘CURAND_CHECK’
curandStatus_t status = condition;
^
src/caffe/common.cpp:190:60: error: ‘curand_generator’ was not declared in this scope
CURAND_CHECK(curandSetGeneratorOffset(curand_generator(), 0));
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro ‘CURAND_CHECK’
curandStatus_t status = condition;
^
src/caffe/common.cpp:199:7: error: ‘Get’ was not declared in this scope
Get().random_generator_.reset(new RNG(seed));
^
src/caffe/common.cpp:199:37: error: expected type-specifier before ‘RNG’
Get().random_generator_.reset(new RNG(seed));
^
src/caffe/common.cpp: At global scope:
src/caffe/common.cpp:202:6: error: ‘Caffe’ has not been declared
void Caffe::SetDevice(const int device_id) {

fail to clone

Submodule path 'caffe': checked out 'aed38841ae4282d90575186b87c3287620913722'
What does this mean?

Error when running demo

When I try
python2 tools/demo.py --gpu 0
to run the demo, I get the following error:
person_search/caffe/build2/lib/libcaffe.so: undefined symbol: ompi_mpi_double

The MPI_Comm_rank() function was called before MPI_INIT was invoked.

Hi, I tried to run the training but faced the following error. It complains that the MPI_Comm_rank() function was called before MPI_INIT was invoked.

ubuntu@ubuntu-S2600CW:~/Downloads/person_search$ experiments/scripts/train.sh 0 --set EXP_DIR resnet50

  • set -e
  • export PYTHONUNBUFFERED=True
  • PYTHONUNBUFFERED=True
  • GPU_ID=0
  • NET=resnet50
  • DATASET=psdb
  • array=($@)
  • len=4
  • EXTRA_ARGS='--set EXP_DIR resnet50'
  • EXTRA_ARGS_SLUG=--set_EXP_DIR_resnet50
  • case $DATASET in
  • TRAIN_IMDB=psdb_train
  • TEST_IMDB=psdb_test
  • PT_DIR=psdb
  • ITERS=50000
    ++ date +%Y-%m-%d_%H-%M-%S
  • LOG=experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50.txt.2017-02-28_19-27-37
  • exec
    ++ tee -a experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50.txt.2017-02-28_19-27-37
  • echo Logging output to experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50.txt.2017-02-28_19-27-37
    Logging output to experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50.txt.2017-02-28_19-27-37
  • python2 tools/train_net.py --gpu 0 --solver models/psdb/resnet50/solver.prototxt --weights data/imagenet_models/resnet50.caffemodel --imdb psdb_train --iters 50000 --cfg experiments/cfgs/resnet50.yml --rand --set EXP_DIR resnet50
    Called with args:
    Namespace(cfg_file='experiments/cfgs/resnet50.yml', gpu='0', imdb_name='psdb_train', max_iters=50000, pretrained_model='data/imagenet_models/resnet50.caffemodel', previous_state=None, randomize=True, set_cfgs=['EXP_DIR', 'resnet50'], solver='models/psdb/resnet50/solver.prototxt')
    Using config:
    {'DATA_DIR': '/home/ubuntu/Downloads/person_search/data',
    'DEDUP_BOXES': 0.0625,
    'EPS': 1e-14,
    'EXP_DIR': 'resnet50',
    'GPU_ID': 0,
    'MATLAB': 'matlab',
    'MODELS_DIR': '/home/ubuntu/Downloads/person_search/models/pascal_voc',
    'PIXEL_MEANS': array([[[ 102.9801, 115.9465, 122.7717]]]),
    'RNG_SEED': 3,
    'ROOT_DIR': '/home/ubuntu/Downloads/person_search',
    'TEST': {'BBOX_REG': True,
    'HAS_RPN': True,
    'MAX_SIZE': 1000,
    'NMS': 0.4,
    'PROPOSAL_METHOD': 'selective_search',
    'RPN_MIN_SIZE': 16,
    'RPN_NMS_THRESH': 0.7,
    'RPN_POST_NMS_TOP_N': 300,
    'RPN_PRE_NMS_TOP_N': 6000,
    'SCALES': [600],
    'SVM': False},
    'TRAIN': {'ASPECT_GROUPING': True,
    'BATCH_SIZE': 128,
    'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
    'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
    'BBOX_NORMALIZE_TARGETS': True,
    'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
    'BBOX_REG': True,
    'BBOX_THRESH': 0.5,
    'BG_THRESH_HI': 0.5,
    'BG_THRESH_LO': 0.0,
    'FG_FRACTION': 0.25,
    'FG_THRESH': 0.5,
    'HAS_RPN': True,
    'IMS_PER_BATCH': 1,
    'MAX_SIZE': 1000,
    'PROPOSAL_METHOD': 'gt',
    'RPN_BATCHSIZE': 256,
    'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'RPN_CLOBBER_POSITIVES': False,
    'RPN_FG_FRACTION': 0.5,
    'RPN_MIN_SIZE': 16,
    'RPN_NEGATIVE_OVERLAP': 0.3,
    'RPN_NMS_THRESH': 0.7,
    'RPN_POSITIVE_OVERLAP': 0.7,
    'RPN_POSITIVE_WEIGHT': -1.0,
    'RPN_POST_NMS_TOP_N': 2000,
    'RPN_PRE_NMS_TOP_N': 12000,
    'SCALES': [600],
    'SNAPSHOT_INFIX': '',
    'SNAPSHOT_ITERS': 10000,
    'USE_FLIPPED': True,
    'USE_PREFETCH': False},
    'USE_GPU_NMS': True}
    *** The MPI_Comm_rank() function was called before MPI_INIT was invoked.
    *** This is disallowed by the MPI standard.
    *** Your MPI job will now abort.

Fail to compile caffe

CXX src/caffe/common.cpp
src/caffe/common.cpp:146:1: error: 'Caffe' does not name a type
Caffe::Caffe()
^
src/caffe/common.cpp:177:1: error: 'Caffe' does not name a type
Caffe::~Caffe() {
^
src/caffe/common.cpp:184:6: error: 'Caffe' has not been declared
void Caffe::set_random_seed(const unsigned int seed) {
^
src/caffe/common.cpp: In function 'void set_random_seed(unsigned int)':
src/caffe/common.cpp:187:11: error: 'Get' was not declared in this scope
if (Get().curand_generator_) {
^
In file included from ./include/caffe/common.hpp:34:0,
from src/caffe/common.cpp:5:
src/caffe/common.cpp:188:70: error: 'curand_generator' was not declared in this scope
CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(curand_generator(),
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro 'CURAND_CHECK'
curandStatus_t status = condition;
^
src/caffe/common.cpp:190:60: error: 'curand_generator' was not declared in this scope
CURAND_CHECK(curandSetGeneratorOffset(curand_generator(), 0));
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro 'CURAND_CHECK'
curandStatus_t status = condition;
^
src/caffe/common.cpp:199:7: error: 'Get' was not declared in this scope
Get().random_generator_.reset(new RNG(seed));
^
src/caffe/common.cpp:199:37: error: expected type-specifier before 'RNG'
Get().random_generator_.reset(new RNG(seed));
^
src/caffe/common.cpp: At global scope:
src/caffe/common.cpp:202:6: error: 'Caffe' has not been declared
void Caffe::SetDevice(const int device_id) {
^
src/caffe/common.cpp: In function 'void SetDevice(int)':
src/caffe/common.cpp:206:42: error: 'Get' was not declared in this scope
if (current_device == device_id && Get().cublas_handle_ && Get().curand_generator_) {
^
src/caffe/common.cpp:212:11: error: 'Get' was not declared in this scope
if (Get().cublas_handle_) CUBLAS_CHECK(cublasDestroy(Get().cublas_handle_));
^
src/caffe/common.cpp:213:11: error: 'Get' was not declared in this scope
if (Get().curand_generator_) {
^
In file included from ./include/caffe/common.hpp:34:0,
from src/caffe/common.cpp:5:
src/caffe/common.cpp:216:34: error: 'Get' was not declared in this scope
CUBLAS_CHECK(cublasCreate(&Get().cublas_handle_));
^
./include/caffe/util/device_alternate.hpp:57:29: note: in definition of macro 'CUBLAS_CHECK'
cublasStatus_t status = condition;
^
src/caffe/common.cpp:217:43: error: 'Get' was not declared in this scope
CURAND_CHECK(curandCreateGenerator(&Get().curand_generator_,
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro 'CURAND_CHECK'
curandStatus_t status = condition;
^
src/caffe/common.cpp:219:55: error: 'Get' was not declared in this scope
CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(Get().curand_generator_,
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro 'CURAND_CHECK'
curandStatus_t status = condition;
^
src/caffe/common.cpp:220:23: error: 'cluster_seedgen' was not declared in this scope
cluster_seedgen()));
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro 'CURAND_CHECK'
curandStatus_t status = condition;
^
src/caffe/common.cpp:220:23: note: suggested alternative:
cluster_seedgen()));
^
./include/caffe/util/device_alternate.hpp:64:29: note: in definition of macro 'CURAND_CHECK'
curandStatus_t status = condition;
^
src/caffe/common.cpp:13:9: note: 'caffe::cluster_seedgen'
int64_t cluster_seedgen(bool sync) {
^
src/caffe/common.cpp: At global scope:
src/caffe/common.cpp:226:6: error: 'Caffe' has not been declared
void Caffe::DeviceQuery() {
^
src/caffe/common.cpp:262:7: error: 'Caffe' has not been declared
class Caffe::RNG::Generator {
^
src/caffe/common.cpp:262:29: error: expected unqualified-id before '{' token
class Caffe::RNG::Generator {
^
make: *** [.build_release/src/caffe/common.o] Error 1

How to handle my dataset

Hi,
I now have my own dataset; what should I do with my data in order to train? Do I need to convert the data into imdb format?

compile the caffe

Hello, when I run make -j8 && make install, it shows the following error:

[ 87%] [ 88%] make[2]: *** No rule to make target `/path/to/cudnn/lib64/libcudnn.so', needed by `lib/libcaffe.so'. Stop.
make[2]: *** Waiting for unfinished jobs....
Building CXX object src/caffe/CMakeFiles/caffe.dir/data_transformer.cpp.o
Building CXX object src/caffe/CMakeFiles/caffe.dir/syncedmem.cpp.o
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
make: *** [all] Error 2

I wonder if it is an error in the path?
Another question:
Can I build without cudnn and openmpi? I only have one server with 4 GPUs, and I wonder if I can use "-gpu all" instead of openmpi.
Thanks!

ERROR : Run demo.py --gpu 0

Spec: CUDA 8.0, cudnn 5.1

The following errors occurred:

cudnn_cov_lay.cu:33] check failed: status == CUDNN_STATUS_SUCCESS( 5 vs. 0 ) CUDN_STATUS_INVALID_VALUE

How can I run demo.py on the GPU?

Error pretrain

Hi @ShuangLI59

When I tried to run experiments/scripts/pretrain.sh, I got the following error:
I0723 15:55:34.441007 21095 layer_factory.hpp:77] Creating layer data
I0723 15:55:34.441382 21095 net.cpp:106] Creating Layer data
I0723 15:55:34.441409 21095 net.cpp:411] data -> data
I0723 15:55:34.441429 21095 net.cpp:411] data -> label
I0723 15:55:34.442243 21105 db_lmdb.cpp:38] Opened lmdb data/psdb/pretrain_db/train_lmdb
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::_S_create
*** Aborted at 1469260534 (unix time) try "date -d @1469260534" if you are using GNU date ***
PC: @ 0x7f6e23885c37 (unknown)
*** SIGABRT (@0x3e800005267) received by PID 21095 (TID 0x7f6defdcf700) from PID 21095; stack trace: ***
@ 0x7f6e23885cb0 (unknown)
@ 0x7f6e23885c37 (unknown)
@ 0x7f6e23889028 (unknown)
@ 0x7f6e2463b535 (unknown)
@ 0x7f6e246396d6 (unknown)
@ 0x7f6e24639703 (unknown)
@ 0x7f6e24639922 (unknown)
@ 0x7f6e2468b3a7 (unknown)
@ 0x7f6e24695262 (unknown)
@ 0x7f6e24696971 (unknown)
@ 0x7f6e24696a2d (unknown)
@ 0x7f6e2546a89a caffe::db::LMDBCursor::value()
@ 0x7f6e2533e5fb caffe::DataReader::Body::read_one()
@ 0x7f6e2533e9c4 caffe::DataReader::Body::InternalThreadEntry()
@ 0x7f6e25484180 caffe::InternalThread::entry()
@ 0x7f6e1cbbea4a (unknown)
@ 0x7f6e178cb184 start_thread
@ 0x7f6e2394937d (unknown)
@ 0x0 (unknown)

Many thanks for help

ask for help

When I run python tools/demo.py:
[[11798,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
Host: simen1

Another transport will be used instead, although this may result in
lower performance.

*** The MPI_Comm_rank() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[simen1:30679] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
ubuntu@simen1:~/liushaungwei/person_search$
What should I do?

ask for help

How can I get the code for the experiments, such as the Recall-Precision curves of different detectors, the final accuracies, and the mAPs?

How to train using other datasets

Thanks for your help, I can run the code now. I understand the pretrain part, but the training process frustrates me. I want to know how to prepare another dataset to fit your code.

Fail to run prepare_roidb(imdb)

Hi, I ran into IndexError: list index out of range when I tried to run the training code.
ubuntu@ubuntu-S2600CW:~/Downloads/person_search$ experiments/scripts/train.sh 0 --set EXP_DIR resnet50clear

  • set -e
  • export PYTHONUNBUFFERED=True
  • PYTHONUNBUFFERED=True
  • GPU_ID=0
  • NET=resnet50
  • DATASET=psdb
  • array=($@)
  • len=4
  • EXTRA_ARGS='--set EXP_DIR resnet50clear'
  • EXTRA_ARGS_SLUG=--set_EXP_DIR_resnet50clear
  • case $DATASET in
  • TRAIN_IMDB=psdb_train
  • TEST_IMDB=psdb_test
  • PT_DIR=psdb
  • ITERS=50000
    ++ date +%Y-%m-%d_%H-%M-%S
  • LOG=experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50clear.txt.2017-07-17_17-50-22
  • exec
    ++ tee -a experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50clear.txt.2017-07-17_17-50-22
  • echo Logging output to experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50clear.txt.2017-07-17_17-50-22
    Logging output to experiments/logs/psdb_train_resnet50_--set_EXP_DIR_resnet50clear.txt.2017-07-17_17-50-22
  • python2 tools/train_net.py --gpu 0 --solver models/psdb/resnet50/solver.prototxt --weights data/imagenet_models/resnet50.caffemodel --imdb psdb_train --iters 50000 --cfg experiments/cfgs/resnet50.yml --rand --set EXP_DIR resnet50clear
    ('sys.path: ', ['/home/ubuntu/Downloads/protobuf-2.6.1/python/dist', '/home/ubuntu/Downloads/protobuf-2.6.1/python/build/lib.linux-x86_64-2.7', '/home/ubuntu/Downloads/protobuf-2.6.1/python/build/lib.linux-x86_64-2.7/google', '/home/ubuntu/Downloads/protobuf-2.6.1/python/build/lib.linux-x86_64-2.7/google/protobuf', '/home/ubuntu/Downloads/person_search/tools/../lib', '/home/ubuntu/Downloads/person_search/tools/../caffe/python', '/home/ubuntu/Downloads/person_search/tools', '/home/ubuntu/digits', '/home/ubuntu/python', '/home/ubuntu/Downloads/person_search', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/home/ubuntu/Downloads/protobuf-2.6.1/python/build/lib.linux-x86_64-2.7/google/protobuf', '/home/ubuntu/Downloads/protobuf-2.6.1/python/build/lib.linux-x86_64-2.7/google', '/home/ubuntu/Downloads/protobuf-2.6.1/python/build/lib.linux-x86_64-2.7', '/home/ubuntu/Downloads/protobuf-2.6.1/python/dist'])
    ('protobuf version: ', '3.2.0')
    Called with args:
    Namespace(cfg_file='experiments/cfgs/resnet50.yml', gpu='0', imdb_name='psdb_train', max_iters=50000, pretrained_model='data/imagenet_models/resnet50.caffemodel', previous_state=None, randomize=True, set_cfgs=['EXP_DIR', 'resnet50clear'], solver='models/psdb/resnet50/solver.prototxt')
    Using config:
    {'DATA_DIR': '/home/ubuntu/Downloads/person_search/data',
    'DEDUP_BOXES': 0.0625,
    'EPS': 1e-14,
    'EXP_DIR': 'resnet50clear',
    'GPU_ID': 0,
    'MATLAB': 'matlab',
    'MODELS_DIR': '/home/ubuntu/Downloads/person_search/models/pascal_voc',
    'PIXEL_MEANS': array([[[ 102.9801, 115.9465, 122.7717]]]),
    'RNG_SEED': 3,
    'ROOT_DIR': '/home/ubuntu/Downloads/person_search',
    'TEST': {'BBOX_REG': True,
    'HAS_RPN': True,
    'MAX_SIZE': 1000,
    'NMS': 0.4,
    'PROPOSAL_METHOD': 'selective_search',
    'RPN_MIN_SIZE': 16,
    'RPN_NMS_THRESH': 0.7,
    'RPN_POST_NMS_TOP_N': 300,
    'RPN_PRE_NMS_TOP_N': 6000,
    'SCALES': [600],
    'SVM': False},
    'TRAIN': {'ASPECT_GROUPING': True,
    'BATCH_SIZE': 128,
    'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
    'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
    'BBOX_NORMALIZE_TARGETS': True,
    'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
    'BBOX_REG': True,
    'BBOX_THRESH': 0.5,
    'BG_THRESH_HI': 0.5,
    'BG_THRESH_LO': 0.0,
    'FG_FRACTION': 0.25,
    'FG_THRESH': 0.5,
    'HAS_RPN': True,
    'IMS_PER_BATCH': 1,
    'MAX_SIZE': 1000,
    'PROPOSAL_METHOD': 'gt',
    'RPN_BATCHSIZE': 256,
    'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'RPN_CLOBBER_POSITIVES': False,
    'RPN_FG_FRACTION': 0.5,
    'RPN_MIN_SIZE': 16,
    'RPN_NEGATIVE_OVERLAP': 0.3,
    'RPN_NMS_THRESH': 0.7,
    'RPN_POSITIVE_OVERLAP': 0.7,
    'RPN_POSITIVE_WEIGHT': -1.0,
    'RPN_POST_NMS_TOP_N': 2000,
    'RPN_PRE_NMS_TOP_N': 12000,
    'SCALES': [600],
    'SNAPSHOT_INFIX': '',
    'SNAPSHOT_ITERS': 10000,
    'USE_FLIPPED': True,
    'USE_PREFETCH': False},
    'USE_GPU_NMS': True}
    WARNING: Logging before InitGoogleLogging() is written to STDERR
    I0717 17:50:23.451275 17665 common.cpp:84] You are running caffe compiled with MPI support. Now it's running in non-parallel model
    Setting device 0
    Loaded dataset psdb_train for training
    Set proposal method: gt
    Appending horizontally-flipped training examples...
    done
    Preparing training data...
    Traceback (most recent call last):
    File "tools/train_net.py", line 122, in
    imdb, roidb = combined_roidb(args.imdb_name)
    File "tools/train_net.py", line 75, in combined_roidb
    roidbs = [get_roidb(s) for s in imdb_names.split('+')]
    File "tools/train_net.py", line 72, in get_roidb
    roidb = get_training_roidb(imdb)
    File "/home/ubuntu/Downloads/person_search/tools/../lib/fast_rcnn/train.py", line 102, in get_training_roidb
    rdl_roidb.prepare_roidb(imdb)
    File "/home/ubuntu/Downloads/person_search/tools/../lib/roi_data_layer/roidb.py", line 31, in prepare_roidb
    roidb[i]['image'] = imdb.image_path_at(i)
    IndexError: list index out of range

Fail to compile caffe

Hi, I tried to compile Caffe but it failed to build caffe.pb.cc.o.
ubuntu@ubuntu-S2600CW:~/Downloads/person_search/caffe/build$ cmake .. -DUSE_MPI=ON -DCUDNN_INCLUDE=/usr/local/cuda-8.0/include -DCUDNN_LIBRARY=/usr/local/cuda-8.0/lib64/libcudnn.so
-- Boost version: 1.55.0
-- Found the following Boost libraries:
-- system
-- thread
-- Found gflags (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found PROTOBUF Compiler: /usr/local/bin/protoc
-- Found lmdb (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
-- Found LevelDB (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libleveldb.so)
-- Found Snappy (include: /usr/include, library: /usr/lib/libsnappy.so)
-- CUDA detected: 8.0
-- Found cuDNN (include: /usr/local/cuda-8.0/include, library: /usr/local/cuda-8.0/lib64/libcudnn.so)
-- Added CUDA NVCC flags for: sm_35 sm_21
-- Cuda + Boost 1.55: Applying noinline work around
-- OpenCV found (/usr/share/OpenCV)
-- Found Atlas (include: /usr/include, library: /usr/lib/libatlas.so)
-- NumPy ver. 1.12.0 found (include: /usr/local/lib/python2.7/dist-packages/numpy/core/include)
-- Boost version: 1.55.0
-- Found the following Boost libraries:
-- python
-- Detected Doxygen OUTPUT_DIRECTORY: ./doxygen/

-- ******************* Caffe Configuration Summary *******************
-- General:
-- Version : (Caffe doesn't declare its version in headers)
-- Git : v0.9999-1625-gaed3884
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- Release CXX flags : -O3 -DNDEBUG -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
-- Debug CXX flags : -g -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
-- Build type : Release

-- BUILD_SHARED_LIBS : ON
-- BUILD_python : ON
-- BUILD_matlab : OFF
-- BUILD_docs : ON
-- CPU_ONLY : OFF

-- Dependencies:
-- BLAS : Yes (Atlas)
-- Boost : Yes (ver. 1.55)
-- glog : Yes
-- gflags : Yes
-- protobuf : Yes (ver. 3.2.0)
-- lmdb : Yes (ver. 0.9.10)
-- Snappy : Yes (ver. 1.1.0)
-- LevelDB : Yes (ver. 1.15)
-- OpenCV : Yes (ver. 2.4.8)
-- CUDA : Yes (ver. 8.0)

-- NVIDIA CUDA:
-- Target GPU(s) : Auto
-- GPU arch(s) : sm_35 sm_21
-- cuDNN : Yes

-- Python:
-- Interpreter : /usr/bin/python2.7 (ver. 2.7.6)
-- Libraries : /usr/lib/x86_64-linux-gnu/libpython2.7.so (ver 2.7.6)
-- NumPy : /usr/local/lib/python2.7/dist-packages/numpy/core/include (ver 1.12.0)

-- Documentaion:
-- Doxygen : /usr/bin/doxygen (1.8.6)
-- config_file : /home/ubuntu/Downloads/person_search/caffe/.Doxyfile

-- Install:
-- Install path : /home/ubuntu/Downloads/person_search/caffe/build/install

-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/Downloads/person_search/caffe/build

ubuntu@ubuntu-S2600CW:~/Downloads/person_search/caffe/build$ make -j8
[ 1%] Building CXX object src/caffe/CMakeFiles/proto.dir///include/caffe/proto/caffe.pb.cc.o
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc: In member function ‘virtual bool caffe::BlobShape::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)’:
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3066:24: error: expected ‘<’ before ‘<:’ token
if (static_cast<::google::protobuf::uint8>(tag) ==
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3066:24: error: expected type-specifier before ‘<:’ token
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3066:24: error: expected ‘>’ before ‘<:’ token
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3066:24: error: expected ‘(’ before ‘<:’ token
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3066:26: error: expected identifier before ‘:’ token
if (static_cast<::google::protobuf::uint8>(tag) ==
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3067:58: error: expected ‘]’ before ‘{’ token
static_cast<::google::protobuf::uint8>(10u)) {
^
In file included from /usr/local/include/google/protobuf/stubs/common.h:40:0,
from /home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.h:9,
from /home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:5:
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc: In lambda function:
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3070:18: error: ‘input’ is not captured
input, this->mutable_dim())));
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3068:11: note: in expansion of macro ‘DO_’
DO_((::google::protobuf::internal::WireFormatLite::ReadPackedPrimitive<
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3070:25: error: ‘this’ was not captured for this lambda function
input, this->mutable_dim())));
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3068:11: note: in expansion of macro ‘DO_’
DO_((::google::protobuf::internal::WireFormatLite::ReadPackedPrimitive<
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3056:68: error: label ‘failure’ used but not defined
#define DO_(EXPRESSION) if (!GOOGLE_PREDICT_TRUE(EXPRESSION)) goto failure
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3068:11: note: in expansion of macro ‘DO_’
DO_((::google::protobuf::internal::WireFormatLite::ReadPackedPrimitive<
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc: In member function ‘virtual bool caffe::BlobShape::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)’:
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3071:9: warning: lambda expressions only available with -std=c++11 or -std=gnu++11 [enabled by default]
} else if (static_cast<::google::protobuf::uint8>(tag) ==
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3071:11: error: expected ‘)’ before ‘else’
} else if (static_cast<::google::protobuf::uint8>(tag) ==
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3071:11: error: expected ‘)’ before ‘else’
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc: In member function ‘virtual bool caffe::BlobProto::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)’:
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3379:24: error: expected ‘<’ before ‘<:’ token
if (static_cast<::google::protobuf::uint8>(tag) ==
^
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3379:24: error: expected type-specifier before ‘<:’ token
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3379:24: error: expected ‘>’ before ‘<:’ token
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3379:24: error: expected ‘(’ before ‘<:’ token
/home/ubuntu/Downloads/person_search/caffe/build/include/caffe/proto/caffe.pb.cc:3379:26: error: expected identifier before ‘:’ token
if (static_cast<::google::protobuf::uint8>(tag) ==
^
make[2]: *** [src/caffe/CMakeFiles/proto.dir///include/caffe/proto/caffe.pb.cc.o] Error 1
make[1]: *** [src/caffe/CMakeFiles/proto.dir/all] Error 2
make: *** [all] Error 2

Many thanks for help

How can I pretrain the model to get a good initial point for the softmax loss?

Hello Shuang,
In your work, you compared the OIM loss with the softmax loss and softmax + pretraining. Obviously, your work is excellent. However, how do you pretrain the model if you just use the softmax loss? I have read your ArXiv 2016 paper, which introduced the pretraining process, so I tried to pretrain the CVPR 2017 model with the help of the ArXiv 2016 work. I cannot obtain the result in the paper, where the test mAP should be about 60%; I only obtain 36% test mAP, and I really don't know the reason. Can you tell me the differences between the pretrain model and the train model, and how I should train the model with only softmax + pretraining? Thank you so much! Have a nice day!

Ask for help

Can this algorithm be used on video to recognize people?

bounding boxes wander around in a search in video

I am applying this project to search for Angela Merkel in a crowd, but the bounding boxes keep wandering over various persons instead of recognizing her. My threshold is 1293. If I lower the detection threshold one unit further, it does not recognize anything.

resulting video here: https://drive.google.com/file/d/0B86WKpvkt66Bc3o0d3F4eXBPZFU/view?usp=sharing

Input image here: merkel2 (image attached)

input video here: https://drive.google.com/file/d/0B86WKpvkt66BM3FrZFg5eExtOEk/view?usp=sharing

code that I modified is here:
create_video.py: https://drive.google.com/file/d/0B86WKpvkt66BTWtVYUdmamRkcDg/view?usp=sharing

test_gallery_batch.py
https://drive.google.com/file/d/0B86WKpvkt66BTWtVYUdmamRkcDg/view?usp=sharing

Stuck at demo_detect

Hi there~

I have a problem running tools/demo.py. The script gets stuck at demo_detect(net, query_img), tracing back to blobs_out = net.forward(**forward_kwargs) and self._forward(start_ind, end_ind).

There is no other information printed while the python process keeps running (checked with top). This problem does not appear when running on CPU (--gpu -1).

The CUDA version is 8.0. I didn't compile with cudnn because there would be a Check failed: status == CUDNN_STATUS_SUCCESS (8 vs. 0) CUDNN_STATUS_EXECUTION_FAILED issue when running this script.

Thank you.

How to build this source with cudnn v4, cuda 7.0

I'd like to run this code on a TX1. The newest version of CUDA on the TX1 is 7.0, so the cudnn version is still v4.
Could you tell me how to change the minimum required cudnn version when building?

Have trouble building Caffe in CPU_ONLY mode

Hi,
I want to try this demo on my laptop, but I cannot build Caffe. It shows "/XXX/person_search/caffe/src/caffe/common.cpp:103:1: error: ‘Caffe’ does not name a type".
Could you tell me how to do it?
Thanks !

Question: what about difference in clothes?

I don't have an issue, but a question: in my experiments, the search does not work so well when the query and the gallery persons have different clothes.

In the training sets, including your new dataset, the same person always has the same clothes.

Why not detect the same person in different clothes? That is important in many practical applications.

I can't run your code correctly. [id_loss = 87.3365 (* 1 = 87.3365 loss)]

I also get the same problem:

I0430 00:03:16.667212 32035 solver.cpp:240] Iteration 0, loss = 45.2935
I0430 00:03:16.667526 32035 solver.cpp:255] Train net output #0: det_accuracy = 0.03125
I0430 00:03:16.667642 32035 solver.cpp:255] Train net output #1: det_loss = 0.693147 (* 1 = 0.693147 loss)
I0430 00:03:16.667735 32035 solver.cpp:255] Train net output #2: id_accuracy = 0
I0430 00:03:16.667831 32035 solver.cpp:255] Train net output #3: id_loss = 87.3365 (* 1 = 87.3365 loss)
I0430 00:03:16.667927 32035 solver.cpp:255] Train net output #4: loss_bbox = 0 (* 1 = 0 loss)
I0430 00:03:16.668015 32035 solver.cpp:255] Train net output #5: rpn_bbox_loss = 0.472784 (* 1 = 0.472784 loss)
I0430 00:03:16.668109 32035 solver.cpp:255] Train net output #6: rpn_cls_loss = 0.693147 (* 1 = 0.693147 loss)
I0430 00:03:16.668198 32035 solver.cpp:640] Iteration 0, lr = 0.001
I0430 00:04:09.829497 32035 solver.cpp:240] Iteration 20, loss = nan
I0430 00:04:09.829540 32035 solver.cpp:255] Train net output #0: det_accuracy = 0.929688
I0430 00:04:09.829557 32035 solver.cpp:255] Train net output #1: det_loss = 0.618617 (* 1 = 0.618617 loss)
I0430 00:04:09.829567 32035 solver.cpp:255] Train net output #2: id_accuracy = -nan
I0430 00:04:09.829576 32035 solver.cpp:255] Train net output #3: id_loss = 0 (* 1 = 0 loss)
I0430 00:04:09.829586 32035 solver.cpp:255] Train net output #4: loss_bbox = nan (* 1 = nan loss)
I0430 00:04:09.829603 32035 solver.cpp:255] Train net output #5: rpn_bbox_loss = 0.0646796 (* 1 = 0.0646796 loss)
I0430 00:04:09.829617 32035 solver.cpp:255] Train net output #6: rpn_cls_loss = 0.679301 (* 1 = 0.679301 loss)
I0430 00:04:09.829638 32035 solver.cpp:640] Iteration 20, lr = 0.001

I have tried setting:
RNG_SEED = 1 or 2, but I still cannot solve it.
P.S.: I didn't modify any code.

Why do I get nan loss_bbox when I train and eval?

When I train the model, the loss_bbox becomes nan after iteration 20:
I0411 17:16:45.481537 19413 solver.cpp:240] Iteration 0, loss = 89.1661
I0411 17:16:45.481566 19413 solver.cpp:255] Train net output #0: det_accuracy = 0.25
I0411 17:16:45.481573 19413 solver.cpp:255] Train net output #1: det_loss = 0.693147 (* 1 = 0.693147 loss)
I0411 17:16:45.481577 19413 solver.cpp:255] Train net output #2: id_accuracy = 0
I0411 17:16:45.481581 19413 solver.cpp:255] Train net output #3: id_loss = 87.3365 (* 1 = 87.3365 loss)
I0411 17:16:45.481586 19413 solver.cpp:255] Train net output #4: loss_bbox = 0.646189 (* 1 = 0.646189 loss)
I0411 17:16:45.481590 19413 solver.cpp:255] Train net output #5: rpn_bbox_loss = 0.0912708 (* 1 = 0.0912708 loss)
I0411 17:16:45.481595 19413 solver.cpp:255] Train net output #6: rpn_cls_loss = 0.693147 (* 1 = 0.693147 loss)
I0411 17:16:45.481600 19413 solver.cpp:640] Iteration 0, lr = 0.001

I0411 17:17:08.716578 19413 solver.cpp:240] Iteration 20, loss = nan
I0411 17:17:08.716603 19413 solver.cpp:255] Train net output #0: det_accuracy = 0.898438
I0411 17:17:08.716612 19413 solver.cpp:255] Train net output #1: det_loss = 0.628226 (* 1 = 0.628226 loss)
I0411 17:17:08.716616 19413 solver.cpp:255] Train net output #2: id_accuracy = 0
I0411 17:17:08.716621 19413 solver.cpp:255] Train net output #3: id_loss = 87.3365 (* 1 = 87.3365 loss)
I0411 17:17:08.716625 19413 solver.cpp:255] Train net output #4: loss_bbox = nan (* 1 = nan loss)
I0411 17:17:08.716629 19413 solver.cpp:255] Train net output #5: rpn_bbox_loss = 0.266092 (* 1 = 0.266092 loss)
I0411 17:17:08.716634 19413 solver.cpp:255] Train net output #6: rpn_cls_loss = 0.684319 (* 1 = 0.684319 loss)
I0411 17:17:08.716639 19413 solver.cpp:640] Iteration 20, lr = 0.001

When I run experiments/scripts/eval_test.sh resnet50 50000 resnet50 there are errors at lib/datasets/psdb.py line 150:
for gt, det in zip(gt_roidb, gallery_det):

det=[[ nan nan nan nan 0.1167134]
[ nan nan nan nan 0.1167134]
[ nan nan nan nan 0.1167134]
...,
[ nan nan nan nan 0.1167134]
[ nan nan nan nan 0.1167134]
[ nan nan nan nan 0.1167134]]
([], [])

Cannot find Normalize type for feat layer

Hi,

I am trying to run demo.py using CPU only. I encountered an undefined Normalize layer type, which is used by the 'feat' layer. I checked the caffe folder, and there is no Normalize layer code there.

Do you have any clue to solve this problem? Thank you very much for your help.

IOError: [Errno 2] No such file or directory

When I run experiments/scripts/eval_test.sh resnet50 50000 resnet50 to evaluate the pre-trained model, I got the following error. It looks like something related to the MATLAB annotation files.
IOError: [Errno 2] No such file or directory: '/path/to/person_search/data/psdb/dataset/annotation/pool.mat'
Could you help me with it?
