Comments (4)
I also remember that during training the GPU load was ~50-70%, while during the 1-by-1 prediction it ramped up to 100%. So I would assume it is related to the torchtext parts.
from mdvc.
Yes, with your help I know how to solve this problem. Thanks a lot!
Yes, I am glad you asked.
I am not sure if both are related, because I remember my CPU was using 16 threads at 100% load when training the model. This happens even with num_workers = 0 (i.e., the default option).
About the data loader: yes, you are right, it is a bit strange. I came up with this workaround because previously (and maybe still) torchtext and PyTorch used different approaches to data loading, which made it painful to combine torchtext's text-handling conveniences with vision-style pipelines. The explanation of how it works is a bit tricky:
The DataLoader which wraps this dataset (Lines 220 to 221 in df3b88a) cannot pad text on its own (without a collate_fn). One possible solution would be a custom collate_fn in PyTorch, but the torchtext package already has BucketIterator, which did precisely this, and back then I felt a custom collate_fn would take too much time to implement. Hence, caption_iterator() is defined above in dataset.py (Line 134 in df3b88a).
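For intuition, the custom collate_fn alternative mentioned above would look roughly like this: a function that pads every token-id sequence in a batch to the batch's maximum length. This is a minimal plain-Python sketch with a hypothetical PAD_IDX, not the code from dataset.py:

```python
PAD_IDX = 0  # hypothetical padding index; a real vocab would define its own

def pad_collate(batch):
    """Pad a batch of variable-length token-id lists to the max length.

    This is the kind of custom collate_fn one could pass to a PyTorch
    DataLoader instead of relying on torchtext's BucketIterator.
    """
    max_len = max(len(seq) for seq in batch)
    return [seq + [PAD_IDX] * (max_len - len(seq)) for seq in batch]

print(pad_collate([[5, 3], [7, 1, 2, 9], [4]]))
# → [[5, 3, 0, 0], [7, 1, 2, 9], [4, 0, 0, 0]]
```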
Besides the iterator (the datasetloader variable), which automatically pads sentences to the same length, caption_iterator() also outputs train_vocab, which can be used as a token-to-word mapping. That is also cumbersome to implement on your own, because you need to build a vocabulary (ew!).
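The vocabulary-building chore that train_vocab saves us from amounts to something like the following. This is a simplified sketch (function name and specials are my own, not the repo's); torchtext builds a similar index-to-token / token-to-index pair internally:

```python
from collections import Counter

def build_vocab(tokenized_captions, specials=('<pad>', '<unk>')):
    """Build index-to-token and token-to-index mappings, most frequent first."""
    counts = Counter(tok for caption in tokenized_captions for tok in caption)
    itos = list(specials) + [tok for tok, _ in counts.most_common()]
    stoi = {tok: i for i, tok in enumerate(itos)}
    return itos, stoi

itos, stoi = build_vocab([['a', 'man', 'runs'], ['a', 'dog', 'runs']])
print(itos[stoi['runs']])  # → runs
```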
Anyway, the data items which we get from self.caption_loader (defined here: Line 400 in df3b88a), in turn, contain both a batch of padded tokenized captions and a batch of indices into the meta.csv (or filtered) rows and, hence, into self.features_dataset. Check out the slicing here (Line 446 in df3b88a), which uses the same csv file (meta) containing the paths to precalculated video feats plus the start and end segment entries.
So, to get the video features, I need to iterate through the self.caption_loader which I got from caption_iterator() to obtain the indices into the dataset rows (the paths to precalculated features). However, I can't index self.caption_loader directly to retrieve these captions and indices – it fails. So, I came up with doing it the opposite way: adding the dataset row index as a field in self.caption_loader. See here (Line 182 in df3b88a) and here (Lines 443 to 446 in df3b88a).
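The idea of carrying the row index as a field can be sketched like this, with hypothetical stand-ins (the dicts and names below are illustrative, not the actual objects in dataset.py): each caption example stores the meta.csv row it came from, so after the text batch is formed we can still slice the feature table.

```python
# Hypothetical stand-ins for the real objects in dataset.py.
captions = [
    {'idx': 0, 'tokens': ['a', 'man', 'runs']},
    {'idx': 1, 'tokens': ['a', 'dog', 'barks']},
]
# Hypothetical meta rows: index -> path to precalculated features.
features_dataset = {0: 'feats_video_0.npy', 1: 'feats_video_1.npy'}

def fetch_features(batch):
    """Use the idx field stored alongside each caption to look up features."""
    return [features_dataset[ex['idx']] for ex in batch]

print(fetch_features(captions))
# → ['feats_video_0.npy', 'feats_video_1.npy']
```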
Each next() call on the iterator returns a shuffled set of indices and captions (caption_data). Then, these indices are used in self.features_dataset, which returns the video features for each index entry in meta – see here: Line 308 in df3b88a.
Finally, the padding of the features is performed in AudioVideoFeaturesDataset, because BucketIterator can digest only 'text' data, while we also need to pad the features.
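Padding feature sequences is the same idea as padding tokens, just along the time axis of a T x D array. A minimal sketch in plain Python (the real code works on tensors inside AudioVideoFeaturesDataset; pad_features is my own illustrative name):

```python
def pad_features(batch, pad_value=0.0):
    """Zero-pad variable-length feature sequences (T_i x D) to a common T."""
    max_t = max(len(seq) for seq in batch)
    dim = len(batch[0][0])
    return [seq + [[pad_value] * dim] * (max_t - len(seq)) for seq in batch]

padded = pad_features([[[1.0, 2.0]], [[3.0, 4.0], [5.0, 6.0]]])
print([len(seq) for seq in padded])  # → [2, 2]
```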
As a result, the batch size is not 1 as defined in the PyTorch DataLoader here (Lines 220 to 224 in df3b88a), but rather the one defined in caption_iterator() here: Lines 209 to 211 in df3b88a.
So, on a high level: I am wrapping a torchtext BucketIterator inside a PyTorch DataLoader with batch size = 1. This means that num_workers won't do much here, because each "item" the loader fetches is already a whole batch. This was the price of having text padding out of the box back then, along with the PyTorch dataset class.
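The wrapping pattern described above can be mimicked without torch at all: a map-style dataset whose __getitem__ ignores the requested index and just pulls the next ready-made batch from an inner iterator. Everything below (class and variable names included) is a simplified illustration of the mechanism, not the repo's actual class:

```python
class CaptionBatchDataset:
    """Mimics wrapping a pre-batching iterator (like BucketIterator) in a
    map-style dataset: __getitem__ ignores idx and pulls the next ready
    batch, so the outer loader's batch size can stay 1."""

    def __init__(self, batches):
        self.batches = batches
        self.update_iterator()

    def update_iterator(self):
        # Re-create the inner iterator (e.g. at the start of each epoch).
        self.it = iter(self.batches)

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, idx):
        # idx is deliberately unused: the inner iterator decides the batch.
        return next(self.it)

ds = CaptionBatchDataset([['cap1', 'cap2'], ['cap3', 'cap4']])
print([ds[i] for i in range(len(ds))])
# → [['cap1', 'cap2'], ['cap3', 'cap4']]
```

Since each __getitem__ already returns a full batch, parallel workers have almost nothing left to parallelize, which matches the num_workers observation above.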
Q: Have you ensured that the indices that caption_iterator returns with the captions (caption_data) are unique when you do caption_data = next(self.caption_loader_iter)?
A: Yes, and they are shuffled after every epoch in update_iterator(self) (defined here: Lines 452 to 456 in df3b88a).
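The uniqueness-plus-shuffling property being asked about boils down to drawing a fresh permutation of the row indices each epoch. A tiny sketch (function name and seeding are mine, for illustration only):

```python
import random

def epoch_indices(n, seed=None):
    """Return a shuffled permutation of n dataset row indices: every index
    appears exactly once per epoch, just in a new random order."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return idx

ep = epoch_indices(5, seed=0)
print(sorted(ep))  # → [0, 1, 2, 3, 4]
```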
from mdvc.
Oh, thanks a lot. I see what you're trying to do. Really nice work! I only had 2 threads assigned to this task when training the model, so it took too much time fetching the dataset. I think I should use more threads if I want to improve GPU utilization. With 8 threads assigned to the task, it looks good.