- Use a Kaggle notebook for training
- Competition: Google Landmark Recognition 2021 (Kaggle)
- From the 81,313 total classes, keep the 30,000 classes with the most images.
- Randomly select 10 images from each kept class => 300,000 images (see the sampling sketch below).
- Advantage: the data are balanced.
- Disadvantage: not enough data.
- Batch size: 64
- Resize images to [64, 64]
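A minimal sketch of this sampling, assuming the competition's `train.csv` with `id` and `landmark_id` columns; the file name, random seed, and use of pandas/torchvision are assumptions, not details from the original write-up:

```python
import pandas as pd
from torchvision import transforms

# Assumed input: the competition's train.csv with "id" and "landmark_id" columns.
train = pd.read_csv("train.csv")

# Keep the 30,000 landmark classes that have the most images.
top_classes = train["landmark_id"].value_counts().nlargest(30_000).index
subset = train[train["landmark_id"].isin(top_classes)]

# Randomly pick 10 images per remaining class -> roughly 300,000 images.
subset = (
    subset.groupby("landmark_id", group_keys=False)
          .apply(lambda g: g.sample(n=min(len(g), 10), random_state=42))
)

# Images are resized to 64x64 before batching (batch size 64).
transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
```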
- EfficientNet-B7
  - Parameters: 140,616,960
  - Loss: CrossEntropyLoss
  - Learning rate: 1e-3
  - Optimizer: RAdam
  - Epochs: 20
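A sketch of how the EfficientNet-B7 configuration above could be built. The use of timm and the `tf_efficientnet_b7` variant are assumptions (the write-up does not say which implementation was used); the parameter count reaches roughly 140M mainly because of the 30,000-class head:

```python
import timm

# Assumed: timm's EfficientNet-B7 with a 30,000-class classification head.
# Whether ImageNet-pretrained weights were used is not stated in the write-up.
model = timm.create_model("tf_efficientnet_b7", num_classes=30_000)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # on the order of 140M with the 30k-class head
```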
- ResNet-50
  - Parameters: 55,587,032
  - Loss: CrossEntropyLoss
  - Learning rate: 1e-3
  - Optimizer: RAdam
  - Epochs: 20
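Both models share the same training recipe (CrossEntropyLoss, RAdam at 1e-3, 20 epochs). A minimal sketch of that recipe with ResNet-50, assuming torchvision's `resnet50`, `torch.optim.RAdam` (PyTorch >= 1.10), and a hypothetical `train_loader` built from the sampled subset above:

```python
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# ResNet-50 with a 30,000-class head (the head size follows the sampled classes;
# whether pretrained weights were used is not stated in the write-up).
model = models.resnet50()
model.fc = nn.Linear(model.fc.in_features, 30_000)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)  # RAdam, lr = 1e-3

# 20-epoch training loop; `train_loader` is the batch-size-64 DataLoader
# built from the sampled subset (hypothetical name).
for epoch in range(20):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```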
- N is the total number of predictions returned by the system, across all queries
- M is the total number of queries with at least one sample from the training set visible in it (note that some queries may not depict samples)
- P(i) is the precision at rank i (example: at rank 3 we have already made 3 predictions; if 2 of them are correct, then P(3) = 2/3)
- rel(i) denotes the relevance of prediction i: it is 1 if the i-th prediction is correct, and 0 otherwise
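These quantities define the competition's Global Average Precision (GAP) metric: predictions from all queries are pooled, sorted by confidence, and scored as GAP = (1/M) * Σ_{i=1..N} P(i) * rel(i). A small sketch of that computation (the function name and input format are illustrative, not taken from the competition code):

```python
def global_average_precision(predictions, num_queries_m):
    """GAP = (1/M) * sum over i of P(i) * rel(i).

    predictions: (confidence, is_correct) pairs pooled across all queries.
    num_queries_m: M, the number of queries that do depict a training-set sample.
    """
    # Rank all predictions by confidence, highest first.
    ranked = sorted(predictions, key=lambda p: p[0], reverse=True)
    correct_so_far, score = 0, 0.0
    for i, (_, is_correct) in enumerate(ranked, start=1):
        if is_correct:                   # rel(i) = 1; rel(i) = 0 terms add nothing
            correct_so_far += 1
            score += correct_so_far / i  # P(i), the precision at rank i
    return score / num_queries_m
```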
- Because of the training resource limits, we could not train on more data or use a larger model.
- EfficientNet performs better than ResNet-50, but it is also a much larger model.
- Our leaderboard score is not good.
- We should consider how to train on more classes while keeping the data balanced.