
Comments (14)

wingman-jr-addon commented on July 18, 2024

Found an excellent resource for model implementations at https://github.com/leondgarse/keras_cv_attention_models#recognition-models, which should accelerate trying out new models.

I've been catching up on the advancements of the past two or so years. While ConvNeXtV2 is intriguing due to its model size, I think a better approach for this use case is to focus not only on model size but also on the pretraining. In particular, I think the DINO/DINOv2 and CLIP-related pretraining approaches are especially helpful because of their robustness to distribution shift. Not only is the training material much closer to our target distribution, but the resulting models are generally much stronger.

To test this theory, I tried a DINOv2 finetune (dense layer only, plus gradual weight changes on the last 15 layers) and got excellent results, better than I had seen from some of my more half-hearted attempts with e.g. Inception and/or ResNet variants. The only challenge is that the smallest available model uses a whopping 47.23G FLOPS (vs. 0.72G for, say, EfficientNetV1 B0). Somewhat surprisingly, I was able to convert it to TensorFlow.js successfully, but it was slow, on the order of several seconds per image prediction. Still, it was a useful experiment to demonstrate the effectiveness of a stronger model. The dataset has also grown somewhat, so it's not quite apples to apples, but notice the improvement in the DET and ROC curves.
SQRXR 112 (EfficientNet Lite L0-based):
[DET and ROC curve images: post_det, post_roc]

SQRXR 119 (DINOv2):
[DET and ROC curve images: post_det, post_roc]
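
For reference, here's a minimal sketch (Keras) of the kind of two-stage finetune described above: a dense head on a frozen backbone first, then gradual unfreezing of the last ~15 layers at a lower learning rate. The backbone constructor, class count, and hyperparameters are placeholders, not the actual SQRXR training script.

```python
# Minimal sketch of the two-stage finetune described above: train a dense head
# on a frozen backbone first, then allow gradual weight changes on the last ~15
# layers at a lower learning rate. Constructor names and hyperparameters are
# placeholders/assumptions, not the exact SQRXR training code.
from tensorflow import keras

def build_finetune_model(backbone: keras.Model, num_classes: int) -> keras.Model:
    backbone.trainable = False  # stage 1: only the new dense head trains
    features = backbone.output  # assumed to already be a pooled feature vector
    outputs = keras.layers.Dense(num_classes, activation="softmax")(features)
    return keras.Model(backbone.input, outputs)

def unfreeze_top_layers(model: keras.Model, n_layers: int = 15) -> None:
    # stage 2: open up the last n_layers for fine-tuning
    for layer in model.layers[-n_layers:]:
        layer.trainable = True

# Hypothetical usage (backbone constructor and class count are assumptions):
# backbone = dinov2_backbone(pretrained=True, num_classes=0)
# model = build_finetune_model(backbone, num_classes=NUM_SQRXR_CLASSES)
# model.compile(keras.optimizers.Adam(1e-3), loss="categorical_crossentropy")
# model.fit(train_ds, validation_data=val_ds, epochs=5)
# unfreeze_top_layers(model, n_layers=15)
# model.compile(keras.optimizers.Adam(1e-5), loss="categorical_crossentropy")  # recompile after changing trainable
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```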

I'm going to check out an EVA02-based model next since its pretraining is CLIP-based, but it still weighs in at 4.72G FLOPS, so in theory it will be a few times slower than the current model.

wingman-jr-addon commented on July 18, 2024

So I tried out the smallest EVA02, EVA02TinyPatch14, training with various finetunes that changed the number of layers of the graph I retrained. Results were OK, but the final DET graphs showed poor and/or uneven performance. My hypothesis is to try the next size up of EVA02 and see whether I get more of the smoothness and performance I observed in the megasized DINOv2.

For comparison, here's a DET output from EVA02Tiny (SQRXR 120):
[DET curve image: post_det]

Now compare that to the currently deployed EfficientNetLite L0-based approach (SQRXR 112):
[DET curve image: post_det]

wingman-jr-addon commented on July 18, 2024

Results from the larger EVA02 model were decent, but not enough to justify the performance penalty.
SQRXR 121 EVA02Small:
[DET and ROC curve images: post_det, post_roc]

wingman-jr-addon commented on July 18, 2024

EfficientFormerV2S2 seems like a potential candidate for incremental improvement (SQRXR 122):
[ROC and DET curve images: post_roc, post_det]

The bottom of the DET curve is still a little squiggly, and I'm not such a fan of the FPR in the "trusted" zone. Still good though, and inference time only increased from about 68 ms for SQRXR 112 to 82 ms for this one, SQRXR 122.
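
For anyone following along, here's a rough sketch of how DET/ROC comparisons like these can be produced from per-image scores with scikit-learn; the data shown is placeholder data, not the SQRXR evaluation set.

```python
# Sketch of how DET / ROC comparisons like the ones in this thread can be
# produced from per-image scores with scikit-learn. The arrays below are
# placeholders, not the actual SQRXR evaluation data.
import numpy as np
from sklearn.metrics import det_curve, roc_curve, roc_auc_score

def summarize(y_true: np.ndarray, y_score: np.ndarray) -> None:
    fpr, fnr, _ = det_curve(y_true, y_score)      # DET: FPR vs. FNR (what post_det plots show)
    roc_fpr, tpr, _ = roc_curve(y_true, y_score)  # ROC: FPR vs. TPR (what post_roc plots show)
    print(f"ROC AUC: {roc_auc_score(y_true, y_score):.4f}")
    # Read off the FPR at a fixed operating point, e.g. ~5% FNR:
    idx = int(np.argmin(np.abs(fnr - 0.05)))
    print(f"FPR at ~5% FNR: {fpr[idx]:.3f}")

# Placeholder usage:
# y_true = np.array([0, 1, 1, 0, 1, 0])
# y_score = np.array([0.10, 0.80, 0.65, 0.30, 0.90, 0.45])
# summarize(y_true, y_score)
```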

wingman-jr-addon commented on July 18, 2024

I ran a few more experiments:

  1. I tried playing around with Uniformer. However, I was getting reasonable but much lower than expected accuracy, so I suspect I'm not integrating something correctly.
  2. I worked with EdgeNext_Base. The DET curve wasn't as promising as it could be:
    [DET curve image: post_det]
  3. I returned to an EfficientFormer variant, EfficientFormerV2L. Performance was not better than the current model, and was actually no better than EfficientFormerV2S2, which surprised me:
    [DET curve image: post_det]
  4. I returned to the possible incremental gains from EfficientFormerV2S2 and looked at ways to smooth out the DET curve by adjusting the training, making some updates like switching to AdamW in a couple of places (see the sketch after this list). This was successful. SQRXR 127 (compare to SQRXR 122 in the previous comment):
    [DET and ROC curve images: post_det, post_roc]
    It's a subtle improvement over SQRXR 112 because the DET curve bows in slightly; for example, at 5% FNR, SQRXR 112 crosses 20% FPR while SQRXR 127 is clearly below that. The non-linear scaling is something to watch carefully here, as subtle changes in the curve can mean bigger changes in final model performance.
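
The AdamW switch in item 4 amounts to something like the following sketch. The learning rate and weight decay values are illustrative assumptions, not the values used for SQRXR 127; recent Keras ships AdamW directly, while older setups got it from tensorflow_addons.

```python
# Illustrative sketch of the optimizer change in item 4: swapping in AdamW
# (Adam with decoupled weight decay). Hyperparameter values are assumptions.
from tensorflow import keras

def compile_with_adamw(model: keras.Model, lr: float = 1e-4, wd: float = 1e-4) -> keras.Model:
    model.compile(
        optimizer=keras.optimizers.AdamW(learning_rate=lr, weight_decay=wd),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```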

wingman-jr-addon commented on July 18, 2024

I tried working with EfficientViT B2 as SQRXR 128. Training went well and overall results were promising:
[DET and ROC curve images: post_det, post_roc]
Unfortunately, the resulting model was a bit difficult both to reload and to convert to TF.js. The use of 'hard_swish' did not play well, but I was able to coax it into a custom layer instead of a function and got the model to reload; however, the use of the PartitionedCall op ultimately meant TF.js couldn't handle it. This might be something to return to, as there may be a way to coax the model into not having a PartitionedCall, but it's not obvious.
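
The custom-layer workaround looks roughly like the sketch below; it illustrates the general idea (a serializable layer wrapping the activation so the saved model can be reloaded), not the exact code used for SQRXR 128.

```python
# Rough sketch of the workaround described above: wrap hard_swish in a proper
# Keras layer (instead of a bare function) so the saved model can be reloaded.
# This is an illustration of the idea, not the exact SQRXR 128 code.
import tensorflow as tf
from tensorflow import keras

@keras.utils.register_keras_serializable(package="wingman")
class HardSwish(keras.layers.Layer):
    # hard_swish(x) = x * relu6(x + 3) / 6
    def call(self, inputs):
        return inputs * tf.nn.relu6(inputs + 3.0) / 6.0

# Reloading then looks roughly like (path is hypothetical):
# model = keras.models.load_model("sqrxr_128_saved_model", custom_objects={"HardSwish": HardSwish})
```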

Next up, I tried RepViT M11 as SQRXR 129. Training was OK but did not seem to provide much advantage over, say, SQRXR 127.
[DET and ROC curve images: post_det, post_roc]

wingman-jr-addon commented on July 18, 2024

Next - Levit 256 as SQRXR 130:
[DET and ROC curve images: post_det, post_roc]
Marginal advantage on ROC over the baseline SQRXR 112; the DET curve is less good.

wingman-jr-addon commented on July 18, 2024

CMT XS Torch as SQRXR 131:
[DET and ROC curve images: post_det, post_roc]
Marginal advantage on ROC AUC, disadvantage on DET.

wingman-jr-addon commented on July 18, 2024

TinyViT 11 as SQRXR 132:
[DET and ROC curve images: post_det, post_roc]
Marginal advantage on ROC AUC, disadvantage on DET.

wingman-jr-addon commented on July 18, 2024

EfficientNetV2 B0 as SQRXR 133:
[DET and ROC curve images: post_det, post_roc]
No advantage.

wingman-jr-addon commented on July 18, 2024

EfficientFormerV2S0 as a smaller variant of an earlier experiment. SQRXR 134:
[DET and ROC curve images: post_det, post_roc]
No improvement, and unsurprisingly not as good as V2S2.

wingman-jr-addon commented on July 18, 2024

Tried a somewhat different training regime with the current EfficientNetLite L0, using some of the other advances like swapping in AdamW. SQRXR 135:
[DET and ROC curve images: post_det, post_roc]
About the same but the DET curve's a bit more gnarly at the beginning. No clear advantage. Still, I think there may be something to the training technique approach here.

wingman-jr-addon commented on July 18, 2024

GCViT XTiny, a bit bigger model, and it shows in the performance. SQRXR 136:
[DET and ROC curve images: post_det, post_roc]
Definite improvement. Scanning speed is slow but sort of tolerable. Might be useful as a bigger-model option. The adventurous can try it out on the test branch while it sticks around: https://github.com/wingman-jr-addon/wingman_jr/tree/sqrxr-136

wingman-jr-addon commented on July 18, 2024

I've been trying this out ... and I'm not sure it's fast enough or good enough to become the next top model yet. I might need to keep searching.
