
adafocus's Introduction

Hi there 👋

I’m currently a Ph.D. student at Tsinghua University. 🔭

Yulin's GitHub stats

adafocus's People

Contributors

blackfeather-wang, frozenburning, jianghaojun

adafocus's Issues

About AdaFocus+

Thanks for your great work!

I would like to know which part of the code implements AdaFocus+, since I wonder how videos with different numbers of frames (when some frames are skipped) can be processed in a batch. Or does it process the frames of a video one by one?
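
For context, one common way to batch clips with different numbers of kept frames is to pad them to a common length and carry a mask so the padded frames are ignored. The sketch below only illustrates that idea; the pad_frame_batch helper and the tensor shapes are assumptions for illustration, not the AdaFocus+ implementation.

import torch

def pad_frame_batch(clips):
    # clips: list of tensors, each of shape (T_i, C, H, W) with its own T_i
    max_t = max(clip.shape[0] for clip in clips)
    padded, mask = [], []
    for clip in clips:
        t = clip.shape[0]
        pad = clip.new_zeros((max_t - t, *clip.shape[1:]))
        padded.append(torch.cat([clip, pad], dim=0))   # (max_t, C, H, W)
        mask.append(torch.arange(max_t) < t)           # True only for real frames
    return torch.stack(padded), torch.stack(mask)      # (B, max_t, C, H, W), (B, max_t)

# Per-frame features can then be pooled over real frames only, e.g.
# pooled = (feats * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)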

Independent spatial focus along temporal dimension

Thanks for your great work, which will motivate a lot of research in this area! After checking your code, I found that all frames in a video seem to be assigned an identical spatial sampling location. Is that correct? If so, where do the per-frame, independent locations shown in Fig. 7 come from?
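
For reference, independent per-frame focusing would look roughly like the sketch below, which crops one patch per frame around a frame-specific centre; the crop_patches name and the coordinate convention are illustrative assumptions, not the repository's API. A shared location would simply use the same (x, y) for every frame.

import torch

def crop_patches(frames, centers, patch_size):
    # frames:  (T, C, H, W) video frames
    # centers: (T, 2) per-frame patch centres in (x, y) pixel coordinates
    T, C, H, W = frames.shape
    half = patch_size // 2
    patches = []
    for t in range(T):
        x = int(centers[t, 0].clamp(half, W - half))
        y = int(centers[t, 1].clamp(half, H - half))
        patches.append(frames[t, :, y - half:y + half, x - half:x + half])
    return torch.stack(patches)   # (T, C, patch_size, patch_size)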

Train on UCF101

I use the following parameters, with MobileNet and ResNet-50 pre-trained via TSN. But the training results are strange: from the beginning, the training accuracy reaches 100%, while the test accuracy is basically unchanged.
CUDA_VISIBLE_DEVICES=4 python stage1.py \
dataset=ucf101 \
data_dir=/data/ymy/data/ \
train_stage=1 \
batch_size=32 \
num_segments_glancer=8 \
num_segments_focuser=12 \
glance_size=224 \
patch_size=144 \
random_patch=True \
epochs=50 \
backbone_lr=0.001 \
fc_lr=0.01 \
lr_type=step \
dropout=0.5 \
load_pretrained_focuser_fc=False \
dist_url=tcp://127.0.0.1:8816 \
eval_freq=1 \
start_eval=0 \
print_freq=25 \
workers=16 \
pretrained_glancer='/AdaFocus-main/new_mobile.tar' \
pretrained_focuser='/AdaFocus-main/new_resnet.tar'

Epoch: [5][ 0/298] Time 43.183 (43.183) Data 42.607 (42.607) Loss 1.1841e-03 (1.1841e-03) Acc@1 100.00 (100.00) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][ 25/298] Time 0.674 ( 2.839) Data 0.107 ( 2.276) Loss 1.7993e-03 (8.2321e-03) Acc@1 100.00 ( 99.76) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][ 50/298] Time 1.080 ( 2.122) Data 0.526 ( 1.560) Loss 1.7797e-02 (1.1389e-02) Acc@1 100.00 ( 99.63) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][ 75/298] Time 0.615 ( 1.833) Data 0.048 ( 1.272) Loss 2.5565e-04 (1.1153e-02) Acc@1 100.00 ( 99.63) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][100/298] Time 0.624 ( 1.724) Data 0.056 ( 1.163) Loss 1.6186e-03 (9.6181e-03) Acc@1 100.00 ( 99.72) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][125/298] Time 0.640 ( 1.601) Data 0.082 ( 1.041) Loss 6.2654e-02 (9.9088e-03) Acc@1 96.88 ( 99.68) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][150/298] Time 0.618 ( 1.596) Data 0.061 ( 1.036) Loss 1.9718e-04 (9.0484e-03) Acc@1 100.00 ( 99.71) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][175/298] Time 0.673 ( 1.526) Data 0.107 ( 0.965) Loss 1.8096e-03 (9.6376e-03) Acc@1 100.00 ( 99.70) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][200/298] Time 0.630 ( 1.523) Data 0.061 ( 0.962) Loss 2.6468e-03 (9.3167e-03) Acc@1 100.00 ( 99.72) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][225/298] Time 11.313 ( 1.514) Data 10.754 ( 0.952) Loss 9.3352e-03 (9.5301e-03) Acc@1 100.00 ( 99.72) Acc@5 100.00 (100.00) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][250/298] Time 0.643 ( 1.475) Data 0.086 ( 0.913) Loss 1.7089e-03 (1.0416e-02) Acc@1 100.00 ( 99.70) Acc@5 100.00 ( 99.99) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][275/298] Time 0.604 ( 1.472) Data 0.046 ( 0.910) Loss 1.3999e-03 (9.9850e-03) Acc@1 100.00 ( 99.73) Acc@5 100.00 ( 99.99) Focuser BackBone LR: 0.001 FC LR: 0
Epoch: [5][297/298] Time 0.647 ( 1.410) Data 0.094 ( 0.848) Loss 1.0134e-03 (1.0606e-02) Acc@1 100.00 ( 99.72) Acc@5 100.00 ( 99.98) Focuser BackBone LR: 0.001 FC LR: 0
Test: [ 0/119] Time 21.262 (21.262) Loss 6.7998e-01 (6.7998e-01) Acc@1 81.25 ( 81.25) Acc@5 100.00 (100.00)
Test: [ 25/119] Time 0.381 ( 1.223) Loss 2.3228e-01 (6.1051e-01) Acc@1 93.75 ( 85.10) Acc@5 100.00 ( 97.60)
Test: [ 50/119] Time 0.366 ( 0.818) Loss 4.2509e-01 (8.5970e-01) Acc@1 93.75 ( 81.07) Acc@5 96.88 ( 95.22)
Test: [ 75/119] Time 0.406 ( 0.680) Loss 2.0299e-01 (1.0306e+00) Acc@1 93.75 ( 78.12) Acc@5 100.00 ( 93.09)
Test: [100/119] Time 0.362 ( 0.609) Loss 3.9213e-01 (9.9937e-01) Acc@1 96.88 ( 78.53) Acc@5 96.88 ( 93.56)
Test: [118/119] Time 0.122 ( 0.571) Loss 1.6555e+00 (9.4728e-01) Acc@1 28.57 ( 79.33) Acc@5 100.00 ( 94.00)
Testing Results: Prec@1 79.329 Prec@5 93.999 Loss 0.94728
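
One detail visible in the log above is that it prints "FC LR: 0" even though fc_lr=0.01 was set; whether that is intended depends on the learning-rate schedule, but it may be worth checking which learning rate each optimizer parameter group actually receives. Below is a minimal sketch, assuming a standard PyTorch optimizer with separate groups for the backbone and the FC head; the toy model is purely illustrative, not the repository's focuser.

import torch
import torch.nn as nn

# Toy stand-in for the focuser: a backbone followed by an FC classifier.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 101))
optimizer = torch.optim.SGD(
    [{'params': model[0].parameters(), 'lr': 0.001},   # backbone_lr
     {'params': model[3].parameters(), 'lr': 0.01}],   # fc_lr
    momentum=0.9)

for i, group in enumerate(optimizer.param_groups):
    print(f"param group {i}: lr = {group['lr']}")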

About other datasets

I'm very interested in your work. But when I experimented with the UCF101 dataset, the results were not encouraging (just around 1%). Looking forward to your reply. Thanks!

The parameters of the experiment are as follows:

CUDA_VISIBLE_DEVICES=0,3,4,5 python stage1.py \
dataset=ucf101 \
data_dir=/data/ymy/data/ \
train_stage=1 \
batch_size=32 \
num_segments_glancer=8 \
num_segments_focuser=12 \
glance_size=224 \
patch_size=144 \
random_patch=True \
epochs=10 \
backbone_lr=0.00001 \
fc_lr=0.01 \
lr_type=cos \
dropout=0.5 \
load_pretrained_focuser_fc=False \
dist_url=tcp://127.0.0.1:8816 \
eval_freq=1 \
start_eval=0 \
print_freq=25 \
workers=16 \
pretrained_glancer='/data/AdaFocus-main/mobilenetv2_segment8.pth.tar' \
pretrained_focuser='/data/AdaFocus-main/resnet50_segment12.pt.tar'  # load the pretrained model
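
For reference, accuracy around 1% is roughly chance level on the 101 UCF101 classes, i.e. what a randomly initialised classifier would produce. If the classifier weights are intentionally not loaded (as load_pretrained_focuser_fc=False suggests), only the FC head should start from random. Below is a minimal, hedged sketch of loading a pre-trained checkpoint while skipping classifier weights; the 'fc' key naming and the 'state_dict' wrapping are assumptions, not necessarily the repository's checkpoint format.

import torch

def load_backbone_only(model, ckpt_path):
    # Load pre-trained weights but skip the classifier so a new class count can be used.
    state = torch.load(ckpt_path, map_location='cpu')
    state = state.get('state_dict', state)                     # some checkpoints wrap the weights
    state = {k: v for k, v in state.items() if 'fc' not in k}  # 'fc' naming is an assumption
    missing, unexpected = model.load_state_dict(state, strict=False)
    print('left randomly initialised:', missing)
    print('ignored from checkpoint:', unexpected)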

hydra error

Hi, when I run the program, it raises an error about the hydra module.
It seems that the "strict" parameter has been deprecated since hydra 1.0, but after I remove "strict", another error about default.yaml appears.
[screenshot of the error]
Also, I don't see a "pretty" argument in the default.yaml file.
Thanks for your help in advance.
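
If it helps, the usual Hydra 0.11 to 1.0 migration touches exactly these two points: strict is dropped from @hydra.main in favour of a directory-plus-config-name form, and the deprecated cfg.pretty() is replaced by OmegaConf.to_yaml(cfg). The sketch below is hedged: the conf/default config layout is an assumption, and whether the "pretty" in the error refers to this deprecation is a guess.

import hydra
from omegaconf import DictConfig, OmegaConf

# Hydra 0.11 style (deprecated):
#   @hydra.main(config_path="conf/default.yaml", strict=False)
@hydra.main(config_path="conf", config_name="default")  # Hydra 1.0 style, no 'strict'
def main(cfg: DictConfig) -> None:
    # Old: print(cfg.pretty())  -- deprecated in OmegaConf 2.0 / Hydra 1.0
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()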

About AdaFocus+

Hello. Thanks for your work.
Does your release include the checkpoints and the corresponding code for evaluating AdaFocus+, the variant that leverages temporal redundancy?

About eval with SCSampler

Wonderful work!

But I have a problem with evaluation: I can't find the code related to SCSampler: Sampling Salient Clips from Video for Efficient Action Recognition. How can I evaluate these two models on a given dataset?

Cannot reproduce 75.0 mAP with 128x128 patch

With the same settings and the same checkpoint (128s3_checkpoint.pth.tar), I cannot reproduce 75.0 mAP in my environment (I get 74.4). The only difference I know of is that I use frames extracted at 1 FPS, while the provided data list seems to assume 30 FPS. However, as far as I know, FPS should not make such a big difference.
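
One way the frame rate can still matter is through which frames uniform segment sampling picks. Below is a hedged sketch of TSN-style uniform index sampling (standard practice, not necessarily the exact sampler used in this repository).

import numpy as np

def uniform_sample_indices(num_frames, num_segments):
    # TSN-style: split the clip into equal segments and take the centre of each.
    ticks = np.linspace(0, num_frames, num_segments + 1)
    return ((ticks[:-1] + ticks[1:]) / 2).astype(int)

# A 10 s video has 300 frames at 30 fps but only 10 at 1 fps,
# so the sampled indices land on slightly different moments in time.
print(uniform_sample_indices(300, 8))
print(uniform_sample_indices(10, 8))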
