
online-action-detection's People

Contributors

vividle


online-action-detection's Issues

Welcome to upgrade to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab repositories and their corresponding branches:

Repository        OpenMMLab 1.0 branch  OpenMMLab 2.0 branch
MMEngine          -                     0.x
MMCV              1.x                   2.x
MMDetection       0.x, 1.x, 2.x         3.x
MMAction2         0.x                   1.x
MMClassification  0.x                   1.x
MMSegmentation    0.x                   1.x
MMDetection3D     0.x                   1.x
MMEditing         0.x                   1.x
MMPose            0.x                   1.x
MMDeploy          0.x                   1.x
MMTracking        0.x                   1.x
MMOCR             0.x                   1.x
MMRazor           0.x                   1.x
MMSelfSup         0.x                   1.x
MMRotate          0.x                   1.x
MMYOLO            -                     0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.
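For reference, a fresh environment can be set up along these lines (a sketch based on the standard OpenMMLab 2.0 installation steps; the environment name `openmmlab2` and the Python version are placeholders, not prescribed by this issue):

```shell
# Create and activate an isolated environment (name and Python version are examples).
conda create -n openmmlab2 python=3.8 -y
conda activate openmmlab2

# Install MMEngine and MMCV 2.x via OpenMIM, the OpenMMLab package manager.
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
```

After this, the individual 2.0-branch algorithm libraries (e.g. MMDetection 3.x) can be installed into the same environment.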

About exemplars of the static branch

Thank you for your excellent work! I would like to ask how we can obtain the exemplars for all categories; this does not seem to be implemented in the code. Thank you!

The question about model precision

I trained your model on my device on the THUMOS dataset without any modification, but the mAP is only 65.39. Since there is a gap between this result and the validation result (66.91) of the model you posted, I wonder whether you used any tricks. If not, can the gap be attributed solely to hardware differences?

[Epoch-7] [IDU-kinetics] mAP: 0.6539
BaseballPitch: 0.4485
BasketballDunk: 0.8277
Billiards: 0.2734
CleanAndJerk: 0.7331
CliffDiving: 0.8972
CricketBowling: 0.4617
CricketShot: 0.3120
Diving: 0.8743
FrisbeeCatch: 0.4104
GolfSwing: 0.7853
HammerThrow: 0.8585
HighJump: 0.7666
JavelinThrow: 0.7920
LongJump: 0.8093
PoleVault: 0.9041
Shotput: 0.6848
SoccerPenalty: 0.4923
TennisSwing: 0.6288
ThrowDiscus: 0.6688
VolleyballSpiking: 0.4487
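As a sanity check on the log above, the reported mAP is simply the unweighted arithmetic mean of the 20 per-class APs (values copied from the log; the snippet is only an illustration of the arithmetic, not the evaluation code of this repository):

```python
# Per-class average precisions copied from the training log above.
per_class_ap = {
    "BaseballPitch": 0.4485, "BasketballDunk": 0.8277, "Billiards": 0.2734,
    "CleanAndJerk": 0.7331, "CliffDiving": 0.8972, "CricketBowling": 0.4617,
    "CricketShot": 0.3120, "Diving": 0.8743, "FrisbeeCatch": 0.4104,
    "GolfSwing": 0.7853, "HammerThrow": 0.8585, "HighJump": 0.7666,
    "JavelinThrow": 0.7920, "LongJump": 0.8093, "PoleVault": 0.9041,
    "Shotput": 0.6848, "SoccerPenalty": 0.4923, "TennisSwing": 0.6288,
    "ThrowDiscus": 0.6688, "VolleyballSpiking": 0.4487,
}

# mAP is the unweighted mean over all 20 THUMOS classes.
mAP = sum(per_class_ap.values()) / len(per_class_ap)
print(f"mAP: {mAP:.4f}")  # -> mAP: 0.6539
```

This reproduces the 0.6539 figure, so the per-class numbers and the headline mAP are at least internally consistent.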

features of HDD and TV

Thanks for sharing the code. Could you please provide the features for the two other benchmarks (HDD and TVSeries), or give some guidance on how to extract them?

the corresponding paper

Hi, I am looking for the code of the paper "Structured Attention Composition for Temporal Action Localization", and the URL provided on arXiv links to this project. However, there seems to be no relation between this code and that paper.

Some Details of The Paper

Hi,
Thank you for sharing the code. While reading your paper, I ran into a few questions.
In Table 4, you write that, given a one-minute video, extracting the RGB features takes 2.3 seconds; how did you measure this time? Is it the time spent extracting features with ResNet-200?
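One plausible way such a per-minute figure is obtained is by wall-clock timing of the extractor over a fixed-length clip. The sketch below uses a dummy extractor as a stand-in (the function `extract_rgb_features`, the 24 fps rate, and the frame size are all hypothetical; a real measurement would time the actual ResNet forward pass on a GPU, ideally averaged over several runs):

```python
import time

def extract_rgb_features(frames):
    """Stand-in for the real extractor (e.g. a ResNet-200 forward pass);
    here it just does trivial per-frame work so the script runs anywhere."""
    return [sum(f) / len(f) for f in frames]

# Hypothetical one-minute clip: 60 s at 24 fps, 8 "pixels" per frame.
frames = [[0.5] * 8 for _ in range(60 * 24)]

start = time.perf_counter()
features = extract_rgb_features(frames)
elapsed = time.perf_counter() - start
print(f"feature extraction for a 1-minute clip: {elapsed:.3f} s")
```

Whether the paper's 2.3 s includes frame decoding and preprocessing, or only the network forward pass, is exactly the ambiguity this issue is asking about.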

HDD dataset visual features

Hi @VividLe,

I have the Honda driving dataset (HDD), which contains images (3 fps), and I would like to use only the RGB (visual) features of it. How would that be possible?
I couldn't find the optical-flow features, and I am not sure how to obtain the dataset.
My task is to reconstruct one modality from the other using a shared multimodal representation (sensor readings and video frames).

Thanks
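A prerequisite for any such multimodal setup is pairing each 3 fps frame with a sensor reading, since the two streams are sampled at different rates. A minimal nearest-neighbor alignment sketch (the 3 fps and 10 Hz rates and the function name are hypothetical, chosen only to illustrate the idea; HDD's actual sensor rates may differ):

```python
import bisect

def nearest_sensor_index(frame_t, sensor_ts):
    """Return the index of the sensor timestamp closest to frame_t.
    sensor_ts must be sorted in ascending order."""
    i = bisect.bisect_left(sensor_ts, frame_t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor_ts)]
    return min(candidates, key=lambda j: abs(sensor_ts[j] - frame_t))

# Hypothetical streams: frames at 3 fps, sensor readings at 10 Hz.
frame_times = [k / 3.0 for k in range(9)]      # 3 seconds of frames
sensor_times = [k / 10.0 for k in range(31)]   # 3 seconds of sensor data

# Pair every frame with its nearest sensor timestamp.
pairs = [(t, sensor_times[nearest_sensor_index(t, sensor_times)])
         for t in frame_times]
```

With frames and sensor readings paired this way, one could train an encoder per modality into a shared representation and a decoder that reconstructs the other modality.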

Baidu Cloud

Excuse me, what is the extraction code for the Baidu Cloud link?
