
temporal-adaptive-module's People

Contributors

liu-zhy


temporal-adaptive-module's Issues

TAM module in the ResBlock

Hello, thank you very much for your work. Where is the appropriate place to insert the TAM module within a ResBlock in SlowFast?
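
A minimal sketch of one plausible placement (an assumption, not the authors' SlowFast code): insert TAM after the first 1x1 convolution of a bottleneck block, so it operates on the channel-reduced features, mirroring how TAM is typically inserted into ResNet bottlenecks. TAMPlaceholder below only mimics the real module's call interface.

import torch.nn as nn

class TAMPlaceholder(nn.Module):
    # Stand-in for the repo's TAM; only the (channels, n_segment) interface matters here.
    def __init__(self, channels, n_segment):
        super().__init__()
        self.n_segment = n_segment

    def forward(self, x):
        return x  # the real TAM performs adaptive temporal aggregation

class BottleneckWithTAM(nn.Module):
    # Assumes in_planes == planes * 4 and stride == 1; add a downsample branch otherwise.
    def __init__(self, in_planes, planes, n_segment):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.tam = TAMPlaceholder(planes, n_segment)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.tam(out)                        # temporal modeling on the reduced channels
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)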

Swapping labels in the training data of Something-something v2

Hi Zhaoyang,

thanks for sharing the nice implementations!
I have a question regarding the data processing of Something-something v2.
I notice that for Something-something v2 you hard-code label_transforms to swap the labels of three pairs of classes: 86 and 87, 93 and 94, and 166 and 167 (line 458 in ops/models.py). However, this is only applied during training, not during validation or testing. Does this mean there are errors in the training annotations of Something-something v2?

Looking forward to your reply and thanks for the efforts.

Best,

Wei
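
For context, a minimal sketch of what such a training-only label swap looks like; the class pairs come from the issue above, while the constant and function names are hypothetical:

# Training-only swap of the three class pairs mentioned above (not the repo's exact code).
LABEL_SWAPS = {86: 87, 87: 86, 93: 94, 94: 93, 166: 167, 167: 166}

def transform_label(label, is_training):
    # Apply the swap only on the training split; validation and test labels are left unchanged.
    if is_training:
        return LABEL_SWAPS.get(label, label)
    return label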

n_segment?

Does n_segment in TAM refer to the number of frames in the input video?

About n_segment

n_segment is the number of frames in the video sequence, but what should I do if the number of frames is not known in advance? Can it not be adjusted adaptively here?
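
As a rough illustration of the two questions above (assumptions about the pipeline, not the repo's exact code): n_segment is the fixed number of frames sampled per clip, and the module uses it to fold the flattened batch back into a temporal axis. Variable-length videos are therefore usually handled by TSN-style sparse sampling down to exactly n_segment frames, rather than by adapting the module at run time.

import torch

n_segment = 8                                  # frames sampled per clip, fixed at build time
x = torch.randn(4 * n_segment, 64, 56, 56)     # (N*T, C, H, W), as a 2D backbone sees the clip

nt, c, h, w = x.shape
n = nt // n_segment                            # recover the batch size
x_temporal = x.view(n, n_segment, c, h, w)     # (N, T, C, H, W) for temporal operations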

The pretrained models don't seem to work

Hi, thanks for your great work. I tried the pretrained somethingv1-8f and somethingv1-16f checkpoints and only got 0.5% test accuracy. Maybe there are some mistakes in these checkpoints. Could you please check that? Or is there anything I need to do with the Something-Something v1 frames? I'm using the original frames without resizing them before they are loaded.
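
If preprocessing is the culprit, a common convention for TSN/TSM-style checkpoints (an assumption here, not confirmed for these weights) is to resize frames so the shorter side is 256 before cropping:

import torchvision.transforms as T

# Sketch of typical evaluation preprocessing; mean/std are the standard ImageNet values.
eval_transform = T.Compose([
    T.Resize(256),                # scale so the shorter side is 256 px
    T.CenterCrop(224),            # or 256 when testing at full resolution
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])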

Testing the model gives an error

Hi, thanks for your awesome work in video recognition and also the release.

I ran the test command but got the following error.

CUDA_VISIBLE_DEVICES=1 python -u test_models.py kinetics \
--weights=./checkpoints/kinetics_RGB_resnet50_tam_avg_segment16_e100_dense/ckpt.best.pth.tar \
--test_segments=16 --test_crops=3 \
--full_res --sample dense-10 --batch_size 1

My environment: Python 3.7, torch 1.6.0, CUDA 11.0
Error log:

  return self.module(*inputs[0], **kwargs[0])
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/sean/workspace/temporal-adaptive-module/ops/models.py", line 327, in forward
    output = self.consensus(base_out)
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/sean/workspace/temporal-adaptive-module/ops/basic_ops.py", line 46, in forward
    return SegmentConsensus(self.consensus_type, self.dim)(input)
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/autograd/function.py", line 149, in __call__
    "Legacy autograd function with non-static forward method is deprecated. "
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/utils/data/_utils/pin_memory.py", line 25, in _pin_memory_loop
    r = in_queue.get(timeout=MP_STATUS_CHECK_INTERVAL)
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/multiprocessing/queues.py", line 113, in get
    return _ForkingPickler.loads(res)
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 282, in rebuild_storage_fd
    fd = df.detach()
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/multiprocessing/resource_sharer.py", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/multiprocessing/resource_sharer.py", line 87, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/multiprocessing/connection.py", line 492, in Client
    c = SocketClient(address)
  File "/home/sean/miniconda3/envs/openmmlab/lib/python3.7/multiprocessing/connection.py", line 620, in SocketClient
    s.connect(address)
ConnectionRefusedError: [Errno 111] Connection refused

Could you please help me figure it out? Thanks.
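
For reference, the RuntimeError comes from ops/basic_ops.py calling a legacy autograd Function (instance call with a non-static forward), which PyTorch 1.5+ rejects. A minimal sketch of the usual fix, assuming an average consensus; the class name mirrors the traceback, but the body is not the repo's exact code:

import torch

class SegmentConsensus(torch.autograd.Function):
    # New-style autograd Function: static forward/backward, invoked via .apply().

    @staticmethod
    def forward(ctx, input_tensor, consensus_type, dim):
        ctx.consensus_type = consensus_type
        ctx.dim = dim
        ctx.input_shape = input_tensor.size()
        if consensus_type == 'avg':
            return input_tensor.mean(dim=dim, keepdim=True)
        return input_tensor  # 'identity'

    @staticmethod
    def backward(ctx, grad_output):
        if ctx.consensus_type == 'avg':
            grad_in = grad_output.expand(ctx.input_shape) / float(ctx.input_shape[ctx.dim])
        else:
            grad_in = grad_output
        return grad_in, None, None

# In the consensus module's forward, replace the legacy call
#   SegmentConsensus(self.consensus_type, self.dim)(input)
# with
#   SegmentConsensus.apply(input, self.consensus_type, self.dim)

Since average consensus needs no custom gradient, another option is to drop the Function entirely and return input.mean(dim=self.dim, keepdim=True) directly from the nn.Module.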

Learning rate of nn.Linear

Thanks for open source such great work.
I notice that the learning rate of every linear layer is multiplied by 5, even the linear layers inside the temporal adaptive modules. I know that a larger learning rate on the last fully connected layer normally brings better performance. Is this a mistake, or does it also produce better results here?
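
As an illustration of the pattern being asked about (an assumption about the mechanism, not a reproduction of the repo's optimizer policies), per-layer learning-rate multipliers are usually wired up through optimizer parameter groups:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.Flatten(), nn.Linear(16 * 30 * 30, 10))

# Split parameters: nn.Linear layers get a 5x learning rate, everything else 1x.
base_params, linear_params = [], []
for m in model.modules():
    if isinstance(m, nn.Linear):
        linear_params += list(m.parameters())
    elif isinstance(m, nn.Conv2d):
        base_params += list(m.parameters())

base_lr = 0.01
optimizer = torch.optim.SGD(
    [
        {'params': base_params, 'lr': base_lr},          # lr_mult = 1
        {'params': linear_params, 'lr': base_lr * 5.0},  # lr_mult = 5
    ],
    momentum=0.9, weight_decay=1e-4,
)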

About MobileNetV2 arch

Thank you for your work.
Have you implemented a MobileNetV2-TAM architecture? Could you release that code?

Thanks!

Awesome work

We have tested several of your related works on our own large real-world dataset and the results are exciting. Respect, bro.
