Comments (6)
Hi, I have some updates in my repository about the MobileNet result, please have a look.
Thanks. :)
mAP: 64.19% | top-1: 81.44% top-2: 87.80% | top-5: 92.93% | top-10: 95.67%
from triplet-reid.
Very cool, thank you for sharing!
IIRC, @Pandoro had success training a MobileNet model for fun, but I would be interested in hearing about your experience (and scores).
This inspired me to add a corresponding section to the end of our README. I'm closing this issue because technically it's not an open issue, but I am curious so if you have updates/scores/results I would be happy to hear about them here!
Hey @cftang0827
thank you for sharing!
I just dug out some really, really old MobileNet results. In fact, they were located on @lucasb-eyer's computer :p I produced evaluation results with our current code and there might be a mismatch between versions here, but I'm getting the following numbers:
mAP: 60.90% | top-1: 79.99% top-2: 86.73% | top-5: 91.98% | top-10: 94.86%
Looking at the args file, I used p=32 and k=4, leaving everything else pretty much the same. All details about this are very fuzzy in my head though, so I can't really recall the remaining experiment setup. So your scores are quite a bit better! Would you mind sharing your exact setup (args.json), and especially how fast the MobileNet is compared to the ResNet-50? Because as far as I recall, it wasn't actually as fast as I had hoped it to be.
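For readers unfamiliar with the p/k notation: batches are built from P identities with K images each, and the batch-hard loss picks, for each anchor, its farthest same-identity embedding and its closest different-identity embedding. A minimal NumPy sketch of the idea (not the repository's TensorFlow implementation; the soft-margin branch corresponds to the "margin": "soft" setting that appears in the args.json later in this thread):

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, pids, margin=None):
    """Batch-hard triplet loss over a P*K batch (toy NumPy version).

    embeddings: (P*K, dim) array, pids: (P*K,) identity labels.
    margin=None uses the soft margin log(1 + exp(.)); a float uses
    the classic hinge max(d_pos - d_neg + margin, 0).
    """
    # Pairwise Euclidean distances between all embeddings in the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = pids[:, None] == pids[None, :]
    # Hardest positive: farthest embedding with the same pid.
    pos_dists = np.where(same, dists, -np.inf).max(axis=1)
    # Hardest negative: closest embedding with a different pid.
    neg_dists = np.where(same, np.inf, dists).min(axis=1)

    x = pos_dists - neg_dists
    if margin is None:                      # soft margin
        losses = np.log1p(np.exp(x))
    else:                                   # hinge with fixed margin
        losses = np.maximum(x + margin, 0.0)
    return losses.mean()
```

With p=2 and k=2, for example, a well-separated batch gives a near-zero soft-margin loss, which is why training drives same-identity embeddings together and different ones apart.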
Hi, I had made a mistake because I used data augmentation during my evaluation, so I would like to correct my result. Here is some information about the evaluation; if there's any problem, please let me know.
python embed.py \
--experiment_root train_mobile \
--dataset data/market1501_test.csv \
--filename test_embeddings.h5 \
--flip_augment \
--crop_augment five \
--aggregator mean
python embed.py \
--experiment_root train_mobile \
--dataset data/market1501_query.csv \
--filename query_embeddings.h5 \
--flip_augment \
--crop_augment five \
--aggregator mean
./evaluate.py \
--excluder market1501 \
--query_dataset data/market1501_query.csv \
--query_embeddings train_mobile/query_embeddings.h5 \
--gallery_dataset data/market1501_test.csv \
--gallery_embeddings train_mobile/test_embeddings.h5 \
--metric euclidean \
--filename train_mobile/market1501_evaluation.json
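For context, the --flip_augment --crop_augment five --aggregator mean combination above embeds ten views per image (four corner crops plus a center crop, each also horizontally flipped) and averages the resulting embeddings. A rough NumPy sketch of that test-time augmentation, where embed_fn is a hypothetical stand-in for the trained network:

```python
import numpy as np

def embed_with_tta(embed_fn, image, crop_hw):
    """Mimic flip + five-crop augmentation with mean aggregation.

    image: (H, W, C) array already resized to the pre-crop size;
    crop_hw: (h, w) net input size; embed_fn: callable mapping a
    crop to a 1-D embedding vector (stand-in for the network).
    """
    h, w = crop_hw
    H, W, _ = image.shape
    # Five crops: four corners plus the center.
    tops = [0, 0, H - h, H - h, (H - h) // 2]
    lefts = [0, W - w, 0, W - w, (W - w) // 2]
    views = []
    for t, l in zip(tops, lefts):
        crop = image[t:t + h, l:l + w]
        views.append(crop)
        views.append(crop[:, ::-1])  # horizontal flip of the same crop
    embs = np.stack([embed_fn(v) for v in views])  # (10, dim)
    return embs.mean(axis=0)                       # mean aggregator
```

The pre-crop sizes in the args.json (288x144) leave a 32x16 margin around the 256x128 net input, which is what makes the five distinct crops possible.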
The result:
totalMemory: 10.91GiB freeMemory: 10.24GiB
2018-07-07 10:26:53.779916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-07-07 10:26:53.942791: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-07 10:26:53.942825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-07-07 10:26:53.942844: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-07-07 10:26:53.943049: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9904 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Evaluating batch 3328-3368/3368
mAP: 66.28% | top-1: 83.11% top-2: 88.42% | top-5: 93.79% | top-10: 95.90%
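As a side note on how numbers like these are computed: the top-k scores are the fraction of queries whose first correct gallery match appears among the k nearest embeddings. A simplified sketch (it omits the Market-1501 same-camera/junk excluder that evaluate.py applies, so real numbers would differ slightly, and it assumes every query has at least one gallery match):

```python
import numpy as np

def cmc_topk(dist, query_pids, gallery_pids, ks=(1, 2, 5, 10)):
    """Top-k match rates from a (num_query, num_gallery) distance matrix."""
    order = np.argsort(dist, axis=1)                 # gallery sorted per query
    matches = gallery_pids[order] == query_pids[:, None]
    first_hit = matches.argmax(axis=1)               # rank of first correct match
    return {k: float((first_hit < k).mean()) for k in ks}
```

mAP additionally rewards ranking *all* correct matches highly, not just the first one, which is why it is typically lower than top-1 on Market-1501.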
Here is my args.json:
{
"batch_k": 4,
"batch_p": 18,
"checkpoint_frequency": 1000,
"crop_augment": true,
"decay_start_iteration": 15000,
"detailed_logs": false,
"embedding_dim": 128,
"experiment_root": "train_mobile",
"flip_augment": true,
"head_name": "fc1024",
"image_root": "Market-1501-v15.09.15/",
"initial_checkpoint": "mobilenet_v1_1.0_224.ckpt",
"learning_rate": 0.0003,
"loading_threads": 8,
"loss": "batch_hard",
"margin": "soft",
"metric": "euclidean",
"model_name": "mobilenet_v1_1_224",
"net_input_height": 256,
"net_input_width": 128,
"pre_crop_height": 288,
"pre_crop_width": 144,
"resume": false,
"train_iterations": 25000,
"train_set": "data/market1501_train.csv"
}
It's just the same as the version in your repository, so I think maybe the smaller p is the key?
BTW, I thought maybe the sklearn version matters, too.
>>> import sklearn
>>> sklearn.__version__
'0.19.1'
Finally, here is my little test of the timing difference between MobileNet and ResNet.
For convenience, I just use the same 3 images for the test, using my wrapper.
ResNet-50
t1 = timeit.default_timer()
human_1_1_vector = api.human_vector(human_1_1)
human_1_2_vector = api.human_vector(human_1_2)
human_2_1_vector = api.human_vector(human_2_1)
t2 = timeit.default_timer()
print('Time elapsed: {} sec'.format(round(t2 - t1, 3)))
Time elapsed: 0.539 sec
MobileNet 224
t1 = timeit.default_timer()
human_1_1_vector = api.human_vector(human_1_1)
human_1_2_vector = api.human_vector(human_1_2)
human_2_1_vector = api.human_vector(human_2_1)
t2 = timeit.default_timer()
print('Time elapsed: {} sec'.format(round(t2 - t1, 3)))
Time elapsed: 0.216 sec
If there's any problem, please kindly let me know.
Thanks again for the good paper and code.
Thanks a lot for sharing your experience! Almost exactly twice the speed and only slightly worse mAP is actually a pretty good result!
Yes! Thank you for the details! In fact, you are even gaining a factor of 2.5 speed-wise! I guess it's still not really real-time yet, given that it's still roughly 70 ms per image, but it's a good push! I guess you are using a 1080 Ti?
If you are interested to optimize for speed, you could also try smaller images. It's likely that this will result in worse scores, but the Mobilenet might behave differently than the ResNet-50. So you could possibly gain another factor 4 while only losing a little bit of performance if you decrease the images to 128x64.
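The factor-4 estimate comes from the pixel count: convolutional cost scales roughly linearly with input area, and 128x64 has a quarter of the pixels of the 256x128 net input used above. A back-of-the-envelope check:

```python
# Rough pixel-count argument: conv FLOPs scale ~linearly with input pixels,
# so the area ratio approximates the attainable speed-up.
full = 256 * 128   # net_input_height x net_input_width from the args.json
small = 128 * 64   # proposed smaller input
speedup = full / small
print(speedup)     # area-based speed-up estimate
```

In practice the realized speed-up is usually somewhat below this bound because of per-image overheads (data loading, memory copies, fixed-cost layers).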
One more thing is your sklearn version: as you mentioned, you are using 0.19.1, so please take care when comparing your results to our results from the paper. With this newer version the evaluation gives slightly better results, as we explained in the README. I'm guessing that compared to the ResNet-50 model you are losing about 4% mAP, but that is pretty acceptable if speed is a concern!