
Comments (6)

cftang0827 commented on July 23, 2024

Hi I have some updates in my repository about mobilenet result, please have a look.
Thanks. :)

mAP: 64.19% | top-1: 81.44% top-2: 87.80% | top-5: 92.93% | top-10: 95.67%



lucasb-eyer commented on July 23, 2024

Very cool, thank you for sharing!

IIRC, @Pandoro had success training a MobileNet model for fun, but I would be interested in hearing about your experience (and scores).

This inspired me to add a corresponding section to the end of our README. I'm closing this issue because technically it's not an open issue, but I am curious, so if you have updates/scores/results I would be happy to hear about them here!


Pandoro commented on July 23, 2024

Hey @cftang0827

thank you for sharing!

I just dug out some really, really old MobileNet results. In fact they were located on @lucasb-eyer's computer :p I produced evaluation results with our current code and there might be a mismatch between versions here, but I'm getting the following numbers:

mAP: 60.90% | top-1: 79.99% top-2: 86.73% | top-5: 91.98% | top-10: 94.86%

Looking at the args file, I used p=32 and k=4, leaving everything else pretty much the same. All details about this are very fuzzy in my head though, so I can't really recall the remaining experiment setup. So your scores are quite a bit better! Would you mind sharing your exact setup (args.json) and especially how fast the MobileNet is compared to the ResNet-50? Because as far as I recall, it wasn't actually as fast as I had hoped.
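
For anyone reading along, the batch-hard loss with the soft margin boils down to something like this minimal NumPy sketch over one P*K batch (illustrative only, not the repository's actual TensorFlow implementation; embeddings is the [P*K, dim] batch and pids holds the person ID of each row):

import numpy as np

def batch_hard_soft_margin(embeddings, pids):
    # Pairwise Euclidean distances between all embeddings in the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = pids[:, None] == pids[None, :]                      # same-identity mask
    hardest_pos = np.where(same, dists, -np.inf).max(axis=1)   # hardest positive per anchor
    hardest_neg = np.where(~same, dists, np.inf).min(axis=1)   # hardest negative per anchor

    # Soft margin: softplus(d_pos - d_neg) instead of a hard hinge.
    return np.mean(np.log1p(np.exp(hardest_pos - hardest_neg)))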


cftang0827 commented on July 23, 2024

Hi, I had made a mistake because I use data augmentation during my training, so I would like to correct my result. Here is some information about the evaluation; if there's any problem, please let me know.

python embed.py \
    --experiment_root train_mobile \
    --dataset data/market1501_test.csv \
    --filename test_embeddings.h5 \
    --flip_augment \
    --crop_augment five \
    --aggregator mean
python embed.py \
    --experiment_root train_mobile \
    --dataset data/market1501_query.csv \
    --filename query_embeddings.h5 \
    --flip_augment \
    --crop_augment five \
    --aggregator mean
./evaluate.py \
    --excluder market1501 \
    --query_dataset data/market1501_query.csv \
    --query_embeddings train_mobile/query_embeddings.h5 \
    --gallery_dataset data/market1501_test.csv \
    --gallery_embeddings train_mobile/test_embeddings.h5 \
    --metric euclidean \
    --filename train_mobile/market1501_evaluation.json
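
As a quick sanity check on the embedding files, they can be opened with h5py; I'm assuming here that embed.py stores the vectors under an 'emb' dataset:

import h5py
import numpy as np

# Assuming the vectors are stored under the 'emb' dataset in the .h5 file.
with h5py.File('train_mobile/query_embeddings.h5', 'r') as f:
    query_embs = np.array(f['emb'])

print(query_embs.shape)  # (number of query images, embedding_dim), e.g. (3368, 128)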

The result

totalMemory: 10.91GiB freeMemory: 10.24GiB
2018-07-07 10:26:53.779916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-07-07 10:26:53.942791: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-07 10:26:53.942825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0
2018-07-07 10:26:53.942844: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N
2018-07-07 10:26:53.943049: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9904 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Evaluating batch 3328-3368/3368
mAP: 66.28% | top-1: 83.11% top-2: 88.42% | top-5: 93.79% | top-10: 95.90%

Here is my args.json

{
  "batch_k": 4,
  "batch_p": 18,
  "checkpoint_frequency": 1000,
  "crop_augment": true,
  "decay_start_iteration": 15000,
  "detailed_logs": false,
  "embedding_dim": 128,
  "experiment_root": "train_mobile",
  "flip_augment": true,
  "head_name": "fc1024",
  "image_root": "Market-1501-v15.09.15/",
  "initial_checkpoint": "mobilenet_v1_1.0_224.ckpt",
  "learning_rate": 0.0003,
  "loading_threads": 8,
  "loss": "batch_hard",
  "margin": "soft",
  "metric": "euclidean",
  "model_name": "mobilenet_v1_1_224",
  "net_input_height": 256,
  "net_input_width": 128,
  "pre_crop_height": 288,
  "pre_crop_width": 144,
  "resume": false,
  "train_iterations": 25000,
  "train_set": "data/market1501_train.csv"
}

It's just the same as the version in your repository, so I think maybe the smaller p is the key?

BTW, I thought maybe the sklearn version is a factor, too.

>>> import sklearn
>>> sklearn.__version__
'0.19.1'

Lastly, here is my little timing test comparing the MobileNet and the ResNet.
For convenience, I just use the same 3 images for the test, via my wrapper.

ResNet-50

import timeit

t1 = timeit.default_timer()
human_1_1_vector = api.human_vector(human_1_1)
human_1_2_vector = api.human_vector(human_1_2)
human_2_1_vector = api.human_vector(human_2_1)
t2 = timeit.default_timer()
print('Time elapsed: {} sec'.format(round(t2 - t1, 3)))
Time elapsed: 0.539 sec

MobileNet 224

t1 = timeit.default_timer()
human_1_1_vector = api.human_vector(human_1_1)
human_1_2_vector = api.human_vector(human_1_2)
human_2_1_vector = api.human_vector(human_2_1)
t2 = timeit.default_timer()
print('Time elapsed: {} sec'.format(round(t2 - t1, 3)))
Time elapsed: 0.216 sec
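
For a slightly more careful measurement one could add a warm-up call and average over repeated runs; this is just a sketch using the same api.human_vector wrapper as above, and the absolute numbers I actually measured are the ones shown:

import timeit

# Warm-up call so one-time graph/session setup doesn't skew the measurement.
api.human_vector(human_1_1)

n_runs = 20
total = timeit.timeit(lambda: api.human_vector(human_1_1), number=n_runs)
print('Mean time per image: {:.1f} ms'.format(1000 * total / n_runs))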

If there's any problem, please kindly let me know.
Thanks again for the good paper and code.


lucasb-eyer commented on July 23, 2024

Thanks a lot for sharing your experience! Almost exactly twice the speed with only slightly worse mAP is actually a pretty good result!


Pandoro commented on July 23, 2024

Yes! Thank you for the details! In fact, you are even gaining a factor of 2.5 speed-wise! I guess it's still not really real-time yet, given that it's still roughly 70 ms per image, but it's a good push! I guess you are using a 1080 Ti?

If you are interested in optimizing for speed, you could also try smaller images. It's likely that this will result in worse scores, but the MobileNet might behave differently than the ResNet-50. So you could possibly gain another factor of 4 while only losing a little bit of performance if you decrease the images to 128x64.
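
Concretely, that would mean something like the following in the args, with the pre-crop sizes scaled by the same factor as above (just a guess at sensible values, not something I have tested):

  "net_input_height": 128,
  "net_input_width": 64,
  "pre_crop_height": 144,
  "pre_crop_width": 72,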

One more thing is your sklearn version: as you mentioned, you are using 0.19.1, so please take care when comparing your results to our results from the paper. With this newer version the evaluation gives slightly better results, as we explained in the README. I'm guessing that compared to the ResNet-50 model you are losing about 4% mAP, but that is pretty acceptable if speed is a concern!

