five-video-classification-methods's Issues

frame level prediction

Hi Matt,

Thanks a lot for your great work. I have trained the "Inception model + LSTM" example. In this example, the features of 40 frames are extracted first and then fed into the LSTM. I am wondering whether this example can make a prediction for each frame. Could you please give me any tips?

Thanks
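A minimal sketch (not part of this repository) of one common way to get a prediction at every timestep: set return_sequences=True on the LSTM and wrap the classifier layers in TimeDistributed. It assumes the same 40 x 2048 InceptionV3 feature sequences described above; num_classes and seq_length are illustrative, and training such a model would require per-frame labels.

# Hedged sketch: per-frame predictions from pre-extracted InceptionV3 features.
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense, Dropout

num_classes = 101   # assumption: UCF101-style class count
seq_length = 40     # assumption: 40 frames per clip, as above

model = Sequential()
# return_sequences=True keeps one output vector per frame instead of only the last one.
model.add(LSTM(2048, return_sequences=True, input_shape=(seq_length, 2048)))
model.add(TimeDistributed(Dense(512, activation='relu')))
model.add(Dropout(0.5))
model.add(TimeDistributed(Dense(num_classes, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# model.predict(features) now returns shape (batch, seq_length, num_classes):
# one class distribution per frame.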

Setup Issues

Hi.
I didn't want to seem too nit-picky, but this may come up as an issue for others as well.

Setup

In the installation, the unrar command is slightly off

Then extract it with unrar -e UCF101.rar.

Should be

Then extract it with unrar e UCF101.rar.

Missing Files

When running python 1_move_files.py

Traceback (most recent call last):
  File "1_move_files.py", line 81, in <module>
    main()
  File "1_move_files.py", line 75, in main
    group_lists = get_train_test_lists()
  File "1_move_files.py", line 20, in get_train_test_lists
    with open(test_file) as fin:
IOError: [Errno 2] No such file or directory: './ucfTrainTestlist/testlist01.txt'

Thanks for the repo. It looks well laid out. Can't wait to play with the models.

TypeError: instance has no next() method

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 630, in data_generator_task
    generator_output = next(self._generator)
Traceback (most recent call last):
  File "/home/a504/pycharm-community-2017.2.4/helpers/pydev/pydevd.py", line 1599, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/a504/pycharm-community-2017.2.4/helpers/pydev/pydevd.py", line 1026, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/a504/PycharmProjects/keras_LRCN/train.py", line 111, in <module>
    main()
  File "/home/a504/PycharmProjects/keras_LRCN/train.py", line 108, in main
    load_to_memory=load_to_memory, batch_size=batch_size, nb_epoch=nb_epoch)
  File "/home/a504/PycharmProjects/keras_LRCN/train.py", line 82, in train
    workers=4)  # number of worker threads
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 1223, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 2083, in fit_generator
    generator_output = next(output_generator)
StopIteration

I prepared the data and ran train.py with the LRCN model, but I just get this error. I am new to this and don't know how to debug it. I would really appreciate it if someone could help.
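For reference, a minimal sketch (not the repository's code) of a generator wrapper that works under both Python 2 and Python 3, assuming the error comes from a custom iterator class that only defines __next__ or is an old-style class. Python 2 looks up next(), Python 3 looks up __next__(), so the class below defines one and aliases the other, and inherits from object so it is a new-style class.

# Hedged sketch: a thread-friendly iterator wrapper compatible with Python 2 and 3.
import threading

class ThreadSafeIterator(object):
    def __init__(self, iterator):
        self.iterator = iterator
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):          # Python 3 iterator protocol
        with self.lock:
            return next(self.iterator)

    next = __next__              # Python 2 iterator protocol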

Error in 'extractor.py' when trying to run 'extract_features.py'

Hi all.
When trying to extract features I get the following error.
(FYI: currently on windows)
C:\Users\HTL99\Desktop\DL>extract_features.py
Using TensorFlow backend.
PlayingDhol
IceDancing
GolfSwing
...
JumpingJack
HandstandPushups
BenchPress
Traceback (most recent call last):
  File "C:\Users\HTL99\Desktop\DL\extract_features.py", line 28, in <module>
    model = Extractor()
  File "C:\Users\HTL99\Desktop\DL\extractor.py", line 18, in __init__
    include_top=True
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\applications\inception_v3.py", line 198, in InceptionV3
    name='mixed0')
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\layers\merge.py", line 627, in concatenate
    return Concatenate(axis=axis, **kwargs)(inputs)
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\topology.py", line 603, in __call__
    output = self.call(inputs, **kwargs)
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\layers\merge.py", line 347, in call
    return K.concatenate(inputs, axis=self.axis)
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\backend\tensorflow_backend.py", line 1768, in concatenate
    return tf.concat([to_dense(x) for x in tensors], axis)
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1075, in concat
    dtype=dtypes.int32).get_shape(
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 669, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 367, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "C:\Users\HTL99\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
Thanks for any help

error while running 1_move_files.py

When I run python 1_move_files.py, I get:

Traceback (most recent call last):
  File "1_move_files.py", line 81, in <module>
    main()
  File "1_move_files.py", line 78, in main
    move_files(group_lists)
  File "1_move_files.py", line 49, in move_files
    filename = parts[1]
IndexError: list index out of range

Is there anything I did wrong?

ffmpeg error

-i: 1: -i: ffmpeg: not found

I am using Ubuntu 14.04.
I get this error followed by:
Generated 0 frames for v_SoccerPenalty_g16_c05

for all the videos.

move_files can't find files

I have downloaded the dataset into the data folder and made the 4 directories as instructed. Now when I run move_files.py it can't find any files. I didn't change the code at all. After extraction the files are in the following format: data->UCF-101->ApplyMakeup->videos

screenshot from 2017-09-03 16-45-14

UnboundLocalError: local variable 'epoch_logs' referenced before assignment

I'm running train.py but got the following errors:

Using TensorFlow backend.
Loading LSTM model.


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 2048)              33562624
_________________________________________________________________
dense_1 (Dense)              (None, 512)               1049088
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 1026
=================================================================
Total params: 34,612,738
Trainable params: 34,612,738
Non-trainable params: 0
_________________________________________________________________
None
2017-11-26 16:12:57.413854: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Creating train generator with 20 samples.
Epoch 1/2
Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\site-packages\keras\utils\data_utils.py", line 560, in data_generator_task
    generator_output = next(self._generator)
  File "D:\Pycharm Projects\CPU\VideoClassificationDemo\data.py", line 25, in next
    return next(self.iterator)
  File "D:\Pycharm Projects\CPU\VideoClassificationDemo\data.py", line 191, in frame_generator
    y.append(self.get_class_one_hot(sample[1]))
  File "D:\Pycharm Projects\CPU\VideoClassificationDemo\data.py", line 103, in get_class_one_hot
    assert len(label_hot) == len(self.classes)
AssertionError

Exception in thread Thread-3:
Traceback (most recent call last):
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\site-packages\keras\utils\data_utils.py", line 560, in data_generator_task
    generator_output = next(self._generator)
  File "D:\Pycharm Projects\CPU\VideoClassificationDemo\data.py", line 25, in next
    return next(self.iterator)
StopIteration

Traceback (most recent call last):
  File "D:/Pycharm Projects/CPU/VideoClassificationDemo/train.py", line 110, in <module>
    main()
  File "D:/Pycharm Projects/CPU/VideoClassificationDemo/train.py", line 107, in main
    load_to_memory=load_to_memory, batch_size=batch_size, nb_epoch=nb_epoch)
  File "D:/Pycharm Projects/CPU/VideoClassificationDemo/train.py", line 81, in train
    workers=4)
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\site-packages\keras\models.py", line 1117, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras2\lib\site-packages\keras\engine\training.py", line 1877, in fit_generator
    callbacks.on_epoch_end(epoch, epoch_logs)
UnboundLocalError: local variable 'epoch_logs' referenced before assignment

Process finished with exit code 1

why the extracted features have sequence in each of them?

Hello,

Thank you for your code; I am learning a lot from it.
I have a question as follows:
the code gives a top-k score.
If I define k=5, then I only get the top-5 scores.
How can I get the [top-1, top-2, top-3, ... top-n] scores at the same time? (In other words, I want to plot the rank.)
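A minimal sketch (not the repository's code) of ranking every class rather than only the top-k, assuming a trained Keras classifier with a softmax output; the names model and sequence are illustrative placeholders.

# Hedged sketch: full ranking of class scores for one sample.
import numpy as np

probs = model.predict(np.expand_dims(sequence, axis=0))[0]  # shape: (num_classes,)
ranked = np.argsort(probs)[::-1]                            # class indices, best first

for rank, class_idx in enumerate(ranked, start=1):
    print("top-%d: class %d, score %.4f" % (rank, class_idx, probs[class_idx]))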

Can't find sequence train.py

Hi, when I run train.py I get the following message:

Can't find sequence. Did you generate them?
Traceback (most recent call last):
  File "train.py", line 114, in <module>
    main()
  File "train.py", line 111, in main
    load_to_memory=load_to_memory)
  File "train.py", line 53, in train
    X, y = data.get_all_sequences_in_memory(batch_size, 'train', data_type, concat)
  File "/home/dario/five-video-classification-methods/data.py", line 117, in get_all_sequences_in_memory
    raise

I am at a loss on this one. I already ran the previous code specified in the README, and I ran extract_features.py for 2 classes only; I don't know if that has anything to do with it.

Any guidance is appreciated.

About sequences

After I ran train_cnn.py, I ran train.py as you say, but there are some errors:
ValueError: Can't find sequence. Did you generate them?
I checked the sequences folder and it's empty, but it should contain the saved features, shouldn't it?
I want to ask whether some code to save them is missing from train_cnn.py, or whether there are errors in train.py.
How should I edit it?

How to allow GPU memory growth

Hi,

I'd like to limit the memory of each GPU.
I found the following code and pasted it into train.py, but it doesn't work.
What am I doing wrong?
Thanks in advance.

if __name__ == '__main__':
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = False
    config.gpu_options.per_process_gpu_memory_fraction = 0.5
    set_session(tf.Session(config=config))
    main()
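For comparison, a hedged sketch of the usual pattern (not verified against this repository): the session must be configured before any Keras model is built, and allow_growth should be True if the goal is to let memory grow on demand while per_process_gpu_memory_fraction caps the total.

# Hedged sketch: configure the TF session before building any model.
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                     # grow as needed
config.gpu_options.per_process_gpu_memory_fraction = 0.5   # cap at 50% of GPU memory
set_session(tf.Session(config=config))

# ...only after this point build the model and call main()/fit().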

set rm.model.fit_generator(shuffle=false)

First of all thanks for your great work!

When you use the LSTM in method 4 and load batches with fit_generator, if shuffle is True by default (see https://keras.io/models/sequential/), the batches will be mixed.
Is that correct?

I have this problem in my code; I also can't set it to False because it raises a TypeError: fit_generator() got an unexpected keyword argument 'shuffle'.

I hope to understand this because it's a bit confusing to me.
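A minimal sketch (not the repository's frame_generator) of how shuffling can instead be done inside the generator itself, under the assumption that this Keras version's fit_generator shuffle argument applies only to Sequence inputs, not to plain generators; samples and load_sample are illustrative placeholders.

# Hedged sketch: shuffle the sample list inside the generator, once per pass.
import random
import numpy as np

def shuffled_batch_generator(samples, batch_size, load_sample):
    samples = list(samples)
    while True:
        random.shuffle(samples)  # reshuffle every pass over the data
        for i in range(0, len(samples) - batch_size + 1, batch_size):
            batch = samples[i:i + batch_size]
            X, y = zip(*(load_sample(s) for s in batch))
            yield np.array(X), np.array(y)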

error in training the c3d model

Hi, thanks for your code, and sorry to interrupt.
I got the following error when training the C3D model (the other 4 models are fine):

Traceback (most recent call last):
  File "/home/catidog/work/action_recognition/five-video-classification-methods/train.py", line 110, in <module>
    main()
  File "/home/catidog/work/action_recognition/five-video-classification-methods/train.py", line 107, in main
    load_to_memory=load_to_memory, batch_size=batch_size, nb_epoch=nb_epoch)
  File "/home/catidog/work/action_recognition/five-video-classification-methods/train.py", line 81, in train
    workers=4)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/models.py", line 1227, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 2147, in fit_generator
    class_weight=class_weight)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1839, in train_on_batch
    outputs = self.train_function(ins)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2357, in __call__
    **self.session_kwargs)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,64,40,40,40]
    [[Node: training/Adam/gradients/pool1/MaxPool3D_grad/MaxPool3DGrad = MaxPool3DGrad[T=DT_FLOAT, TInput=DT_FLOAT, _class=["loc:@pool1/MaxPool3D"], data_format="NDHWC", ksize=[1, 1, 2, 2, 1], padding="VALID", strides=[1, 1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/device:GPU:0"](conv1/Relu, pool1/MaxPool3D, training/Adam/gradients/conv2/convolution_grad/Conv3DBackpropInputV2)]]

Caused by op 'training/Adam/gradients/pool1/MaxPool3D_grad/MaxPool3DGrad', defined at:
  File "/home/catidog/work/action_recognition/five-video-classification-methods/train.py", line 110, in <module>
    main()
  File "/home/catidog/work/action_recognition/five-video-classification-methods/train.py", line 107, in main
    load_to_memory=load_to_memory, batch_size=batch_size, nb_epoch=nb_epoch)
  File "/home/catidog/work/action_recognition/five-video-classification-methods/train.py", line 81, in train
    workers=4)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/models.py", line 1227, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 2016, in fit_generator
    self._make_train_function()
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 990, in _make_train_function
    loss=self.total_loss)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/optimizers.py", line 415, in get_updates
    grads = self.get_gradients(loss, params)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/optimizers.py", line 73, in get_gradients
    grads = K.gradients(loss, params)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2394, in gradients
    return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 581, in gradients
    grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 353, in _MaybeCompile
    return grad_fn()  # Exit early
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 581, in <lambda>
    grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/nn_grad.py", line 155, in _MaxPool3DGrad
    data_format=op.get_attr("data_format"))
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2939, in _max_pool3d_grad
    data_format=data_format, name=name)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/home/catidog/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

Please give me some advice, thanks a lot.

Could you please add demo.py

Could you please add a demo.py that takes a video as input and returns the top 5 activity classes?
At least, could you mention the steps involved? I will try to build the demo.
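A minimal sketch (not the repository's demo) of what such a script could look like, assuming the input video's features were already extracted the same way as during training (a 40 x 2048 sequence saved as .npy) and that a trained checkpoint exists; all paths and filenames below are hypothetical.

# Hedged sketch: predict the top 5 classes for one pre-extracted feature sequence.
import numpy as np
from keras.models import load_model

model = load_model('data/checkpoints/lstm-features.hdf5')        # hypothetical checkpoint path
sequence = np.load('data/sequences/my_video-40-features.npy')    # hypothetical sequence path

probs = model.predict(np.expand_dims(sequence, axis=0))[0]
top5 = np.argsort(probs)[::-1][:5]
for idx in top5:
    print("class index %d: %.4f" % (idx, probs[idx]))
# Mapping indices back to class names would use the same class list as training.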

sequence folder is empty

I got an error when I start training: even though I followed the instructions, the sequences folder is completely empty.

ValueError: No model found in config file.

When I run extract_features.py, I use the file inception_v3_weights_th_dim_ordering_th_kernels.h5 that I downloaded to my computer, but it raises an error at:

  File "C:\Anaconda2\lib\site-packages\keras\models.py", line 238, in load_model
    model_config = f.attrs.get('model_config')

I have changed the backend to Theano. I tried to solve the problem; some people say it may be that the program doesn't define the architecture of Inception V3, but I am not sure. Please help, thank you!
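For what it's worth, a hedged sketch of the usual workaround when an .h5 file contains only weights and no model configuration (so load_model cannot rebuild it): construct the architecture first and then call load_weights. This assumes the "th" weights file matches the configured backend and image dimension ordering; otherwise letting Keras download matching weights with weights='imagenet' avoids the mismatch entirely.

# Hedged sketch: build InceptionV3, then load a weights-only file into it.
from keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(weights=None, include_top=True)
base_model.load_weights('inception_v3_weights_th_dim_ordering_th_kernels.h5')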

Variable-length sequence

Hi Matt,

Thanks for your great work :)
How could the program be adapted to work with all the frames generated for videos of different lengths?

Thanks!
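One possible direction, sketched minimally and not taken from this repository: pad every feature sequence to a common length and mask the padding, so videos with different frame counts can share one model. max_len, num_classes, and sequences are assumptions for illustration.

# Hedged sketch: pad variable-length feature sequences and mask the padding.
import numpy as np
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense

max_len = 300        # assumed upper bound on frames per video
num_classes = 101    # assumed class count
# `sequences` is a list of (num_frames_i, 2048) feature arrays, one per video.
X = np.zeros((len(sequences), max_len, 2048), dtype='float32')
for i, seq in enumerate(sequences):
    length = min(len(seq), max_len)
    X[i, :length] = seq[:length]     # zero-padding after the real frames

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(max_len, 2048)))  # ignore padded frames
model.add(LSTM(512))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])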

The dimension of output of lstm dense2 is not the same as y

Here I run your newest version of the LSTM. I cannot get the training file running. The only difference is TensorFlow version 1.4 (maybe this is the problem?).

Loading LSTM model.
<keras.models.Sequential object at 0x7f2bbb497860>
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_1 (LSTM)                (None, 40, 2048)          33562624  
_________________________________________________________________
flatten_1 (Flatten)          (None, 81920)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               41943552  
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 101)               51813     
=================================================================
Total params: 75,557,989
Trainable params: 75,557,989
Non-trainable params: 0
_________________________________________________________________
None
Traceback (most recent call last):
  File "train.py", line 114, in <module>
    main()
  File "train.py", line 111, in main
    load_to_memory=load_to_memory, batch_size=batch_size, nb_epoch=nb_epoch)
  File "train.py", line 73, in train
    epochs=nb_epoch)
  File "/home/jhccc/tensorflow/lib/python3.5/site-packages/keras/models.py", line 960, in fit
    validation_steps=validation_steps)
  File "/home/jhccc/tensorflow/lib/python3.5/site-packages/keras/engine/training.py", line 1574, in fit
    batch_size=batch_size)
  File "/home/jhccc/tensorflow/lib/python3.5/site-packages/keras/engine/training.py", line 1411, in _standardize_user_data
    exception_prefix='target')
  File "/home/jhccc/tensorflow/lib/python3.5/site-packages/keras/engine/training.py", line 153, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking target: expected dense_2 to have shape (None, 101) but got array with shape (8596, 1)
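A minimal sketch (not the repository's fix), assuming the target array currently holds integer class labels with shape (num_samples, 1): one-hot encoding it makes it match the (None, 101) softmax output that dense_2 expects.

# Hedged sketch: one-hot encode integer labels to match the softmax output.
from keras.utils import to_categorical

y = to_categorical(y.reshape(-1), num_classes=101)   # -> shape (num_samples, 101)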


Error in extractor.py; while running extract_features.py

Hey!

I encountered the following error while running extract_features.py. Can you please help me out?

Traceback (most recent call last):
  File "extract_features.py", line 28, in <module>
    model = Extractor()
  File "/home/gaurav/Desktop/VideoClassification/extractor.py", line 18, in __init__
    include_top=True
  ...
  TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

The relevant section in extractor.py is:

14        if weights is None:
15            # Get model with pretrained weights.
16            base_model = InceptionV3(
17                weights='imagenet',
18                include_top=True
              )

Thanks in advance! :)

Error while using train.py

File "train.py", line 114, in
main()
File "train.py", line 111, in main
load_to_memory=load_to_memory)
File "train.py", line 53, in train
X, y = data.get_all_sequences_in_memory(batch_size, 'train', data_type, concat)
File "/home2/koushik/five-video-classification-methods-master/data.py", line 117, in get_all_sequences_in_memory
raise
TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType

Error in 2_extract_files.py

Got the below error while trying to execute 2_extract_files.py:
Traceback (most recent call last):
  File "2_extract_files.py", line 99, in <module>
    main()
  File "2_extract_files.py", line 96, in main
    extract_files()
  File "2_extract_files.py", line 50, in extract_files
    call(["ffmpeg", "-i", src, dest])
  File "/usr/lib/python2.7/subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

UCF101: site down

Hi,

Can anyone provide a temporary link for UCF101.rar, or send it to me (I can host the file)? The official link is down.

Thanks

IndexError

In extractor.py at line 53 I get an IndexError for features[0][0][0] with the imagenet weights. I only see 2 dimensions when I try to print the features.
Also, can you give us a brief but precise step-by-step guide as to the order in which the files need to be executed? It gets a bit confusing after the 3rd file in data.

Error on 2_extract_files

Hi, would love to try out the classification methods, but sadly I get an error on 2_extract_files:

Traceback (most recent call last):
  File "2_extract_files.py", line 99, in <module>
    main()
  File "2_extract_files.py", line 96, in main
    extract_files()
  File "2_extract_files.py", line 38, in extract_files
    video_parts = get_video_parts(video_path)
  File "2_extract_files.py", line 76, in get_video_parts
    filename = parts[3]
IndexError: list index out of range

I did everything in the README beforehand, and all modules are installed and up to date (I think?)!

Thank you for your help!

train_cnn.py

def main(weights_file):
    ....
    model = get_top_layer_model(model)
    model = train_model(model, 10, generators)

I think get_top_layer_model(model) should be get_top_layer_model(base_model).
In your code, none of the layers will be trained, because all layers of model have been set trainable = False.

"list index out of range" when running extract_features.py

Hello all,
When trying to run extract_features.py I get the following error:

C:\Users\HTL99\Desktop\DL>extract_features.py
Using TensorFlow backend.
Traceback (most recent call last):
  File "C:\Users\HTL99\Desktop\DL\extract_features.py", line 25, in <module>
    data = DataSet(seq_length=seq_length, class_limit=class_limit)
  File "C:\Users\HTL99\Desktop\DL\data.py", line 50, in __init__
    self.classes = self.get_classes()
  File "C:\Users\HTL99\Desktop\DL\data.py", line 82, in get_classes
    if item[1] not in classes:
IndexError: list index out of range

I'm currently on Windows, so I needed to change foldernames to foldernames/ etc., but I still did not manage to run it.
Any ideas?
Thanks.

Can we use multiple GPUs with this code ?

I have two GTX 1080 Ti GPUs. I am interested in model 3 (LRCN). Is there a way we can do distributed training with the Keras wrapper, training two batches on 2 GPUs and updating the weights as we do in plain TF? I am very new to Keras. This would be really nice. I also found this documentation in Keras about multiple GPUs, but it doesn't give much:

Multi GPU keras
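A minimal sketch (not specific to this repository) using Keras' built-in data-parallel wrapper, available as keras.utils.multi_gpu_model from Keras 2.0.9 onward: the wrapper replicates the single-GPU model on both GPUs and splits each batch between them. rm.model here stands for the LRCN (or other) model built as usual; the generator and step counts are illustrative.

# Hedged sketch: data-parallel training on 2 GPUs with multi_gpu_model.
from keras.utils import multi_gpu_model

parallel_model = multi_gpu_model(rm.model, gpus=2)   # rm.model: the single-GPU model
parallel_model.compile(loss='categorical_crossentropy', optimizer='adam',
                       metrics=['accuracy'])
parallel_model.fit_generator(generator, steps_per_epoch=steps, epochs=epochs)
# The weights live in the original rm.model, so save checkpoints from rm.model.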

Use binary numpy save/load instead of txt

When we extract features, we use savetxt, which means we have to use loadtxt when loading in the data gen, which is slow. Save the binary instead so we don't have to do the silly pandas load thing.
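A minimal sketch of the proposed change, assuming each feature sequence is a plain numpy array and the paths are illustrative: write with np.save and read back with np.load instead of savetxt/loadtxt.

# Hedged sketch: binary save/load of one feature sequence.
import numpy as np

# extraction side
np.save(sequence_path + '.npy', sequence)     # binary, fast, preserves dtype and shape

# data-generator side
sequence = np.load(sequence_path + '.npy')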

Issue running train.py for LSTM

Hi,
I have extracted sequences for just 4 classes and am now trying to run train.py with the LSTM model. I encountered the following error and don't know where the problem is. I'd highly appreciate any help on how I can fix this. I have already taken the latest code from this repository. Thanks in advance for the help.

Here is the error details:

train.py
Using TensorFlow backend.
Before clean_data method
After clean_data method
Loading LSTM model.


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 2048)              33562624
_________________________________________________________________
dense_1 (Dense)              (None, 512)               1049088
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 4)                 2052
=================================================================
Total params: 34,613,764
Trainable params: 34,613,764
Non-trainable params: 0
_________________________________________________________________
None
2017-11-29 23:19:13.503041: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-29 23:19:13.503041: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
Creating train generator with 375 samples.
Epoch 1/1000
Traceback (most recent call last):
  File "C:/Users/cj127r/Documents/Directv/ML/Nano/capstone/train.py", line 111, in <module>
    main()
  File "C:/Users/cj127r/Documents/Directv/ML/Nano/capstone/train.py", line 108, in main
    load_to_memory=load_to_memory, batch_size=batch_size, nb_epoch=nb_epoch)
  File "C:/Users/cj127r/Documents/Directv/ML/Nano/capstone/train.py", line 81, in train
    workers=4)
  File "C:\ProgramData\Anaconda2\envs\python36\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda2\envs\python36\lib\site-packages\keras\models.py", line 1117, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\ProgramData\Anaconda2\envs\python36\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda2\envs\python36\lib\site-packages\keras\engine\training.py", line 1840, in fit_generator
    class_weight=class_weight)
  File "C:\ProgramData\Anaconda2\envs\python36\lib\site-packages\keras\engine\training.py", line 1559, in train_on_batch
    check_batch_axis=True)
  File "C:\ProgramData\Anaconda2\envs\python36\lib\site-packages\keras\engine\training.py", line 1238, in _standardize_user_data
    exception_prefix='target')
  File "C:\ProgramData\Anaconda2\envs\python36\lib\site-packages\keras\engine\training.py", line 128, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (32, 1, 4)

Process finished with exit code 1
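A minimal sketch (not a confirmed diagnosis), assuming the one-hot labels were stacked with an extra singleton axis: dropping that axis gives the (batch, num_classes) shape the final Dense layer expects.

# Hedged sketch: remove the stray singleton axis from the label batch.
import numpy as np

y = np.squeeze(y, axis=1)    # (32, 1, 4) -> (32, 4)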

2_extract_files.py not working

When I run python 2_extract_files.py, this is what I get:

Traceback (most recent call last):
  File "2_extract_files.py", line 99, in <module>
    main()
  File "2_extract_files.py", line 96, in main
    extract_files()
  File "2_extract_files.py", line 38, in extract_files
    video_parts = get_video_parts(video_path)
  File "2_extract_files.py", line 76, in get_video_parts
    filename = parts[3]
IndexError: list index out of range

I already have ffmpeg installed and added it to my system path. What could be wrong?
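A minimal sketch (not the repository's code), assuming the IndexError comes from splitting the path on '/' while Windows paths use '\\': normalizing the separators before splitting makes the same positional lookup work on both platforms. The function below only mirrors the shape of get_video_parts for illustration.

# Hedged sketch: platform-independent splitting of a train/test video path.
import os

def get_video_parts(video_path):
    parts = os.path.normpath(video_path).split(os.sep)
    filename = parts[-1]                              # e.g. v_Foo_g01_c01.avi
    filename_no_ext = os.path.splitext(filename)[0]
    classname = parts[-2]                             # e.g. ApplyMakeup
    train_or_test = parts[-3]                         # 'train' or 'test'
    return train_or_test, classname, filename_no_ext, filename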

ffmpeg error

Hi, I am getting the following error when running 2_extract_files.py:

Traceback (most recent call last):
  File "2_extract_files.py", line 98, in <module>
    main()
  File "2_extract_files.py", line 95, in main
    extract_files()
  File "2_extract_files.py", line 49, in extract_files
    call(['ffmpeg', "-i", src, dest])
  File "/usr/lib/python3.5/subprocess.py", line 557, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'
