
keras-depthwiseconv3d's Introduction

Hi there 👋

I am an Assistant Professor at University of Twente's Data Management & Biometrics (DMB) group. Previously, I was a Postdoc at VUB and a Research Associate at the University of Bristol working with Dima Damen on video understanding. I obtained my PhD from Utrecht University where I was lucky to be supervised by Ronald Poppe and Remco C. Veltkamp. My thesis was on human action and interaction recognition in everyday social settings. During my time in Utrecht, I additionally worked on improving the efficiency and interpretability of spatiotemporal deep learning video classification models.

Alex Stergiou: webpage | Google Scholar | UT webpage | Twitter/X | LinkedIn

keras-depthwiseconv3d's People

Contributors

alexandrosstergiou, danganea, zfturbo


keras-depthwiseconv3d's Issues

loading a model with this layer

Hello, thank you for the code!
I cannot load a model that uses your layer. I am running:

 load_model('model.h5',
              custom_objects={'DepthwiseConv3D': DepthwiseConv3D})

and I am getting "TypeError: unorderable types: NoneType() > int()", raised at

if (self.groups > self.input_dim):
    raise ValueError('The number of groups cannot exceed the number of channels')

Any idea what is going wrong? Thank you
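
A workaround that often helps with custom-layer loading problems, offered as a sketch (it assumes the saved config is what loses the groups value; the input shape and layer arguments below are placeholders, not your actual model): rebuild the architecture in code and load only the weights, so the layer is never deserialized from its config.

    from keras.layers import Input
    from keras.models import Model
    from DepthwiseConv3D import DepthwiseConv3D

    # Placeholder architecture -- replace with the topology the model was trained with.
    inputs = Input((32, 32, 32, 4))
    x = DepthwiseConv3D(kernel_size=(3, 3, 3))(inputs)
    model = Model(inputs, x)

    # .h5 files written by model.save() also contain the weights.
    model.load_weights('model.h5')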

Please check

Thanks for the awesome work, but I think there may be an issue in the implementation, because the number of parameters doesn't add up. I implemented depthwise convolution with a depth multiplier of 32 in pure TensorFlow, strictly following the Xception paper, and I see far more parameters than your implementation (400 million vs. 7 million). With a depth multiplier that large, it is impossible to have so few weights (especially for 3D models).
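
For reference, a depthwise convolution's weight count is kernel volume x input channels x depth multiplier (plus one bias per output channel), which is easy to sanity-check by hand. A minimal sketch; the kernel size and input channel count below are assumptions, since the issue only states the depth multiplier:

    # Back-of-the-envelope parameter count for one depthwise 3D convolution layer.
    kernel = (3, 3, 3)        # assumed kernel size
    in_channels = 256         # assumed input channel count
    depth_multiplier = 32

    weights = kernel[0] * kernel[1] * kernel[2] * in_channels * depth_multiplier
    biases = in_channels * depth_multiplier      # one bias per output channel
    print(weights + biases)                      # 221184 + 8192 = 229376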

AttributeError: 'tuple' object has no attribute 'layer'

I installed keras 2.2.4 and tensorflow 1.14. If I try this:

    input = tf.ones((1, 256, 256, 32, 3), dtype=tf.float32)
    x = DepthwiseConv3D(kernel_size=(3, 3, 3), depth_multiplier=2)(input)

the result is OK, but when I try:

    def conv_separable(net, width, ksize=ksize, s=1, d=1):
        ret = DepthwiseConv3D(kernel_size=(ksize, ksize, ksize),
                              strides=(s, s, s),
                              dilation_rate=(d, d, d),
                              padding='SAME',
                              activation='relu')(net)
        ret = conv(ret, width, 1, 1, 1)
        return ret

it raises:

    inbound_layers = nest.map_structure(lambda t: t._keras_history.layer,
    AttributeError: 'tuple' object has no attribute 'layer'

I don't know why.
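
A hedged guess at the cause, not confirmed by the author: mixing standalone keras and tensorflow.keras layers in the same graph (their _keras_history records are not compatible). A sketch that keeps everything in a single namespace and builds on a Keras Input; the caller's conv helper is omitted because it is not shown in the issue:

    from keras.layers import Input
    from keras.models import Model
    from DepthwiseConv3D import DepthwiseConv3D

    # Build the separable block on a Keras Input rather than tf.ones.
    net = Input((256, 256, 32, 3))
    out = DepthwiseConv3D(kernel_size=(3, 3, 3),
                          strides=(1, 1, 1),
                          dilation_rate=(1, 1, 1),
                          padding='same',
                          activation='relu')(net)
    model = Model(net, out)
    model.summary()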

Any pre-trained weights provided?

Hey, thank you for your work. Have you evaluated this model on any video datasets such as UCF-101 or HMDB-51, and what accuracy does it reach? If you have evaluated it on a bigger dataset such as Kinetics, could you please provide pre-trained weights? I really need them due to limited computing resources. Thank you!

channels_last support ?

Hi,

Your readme says the function supports only channels_first volumes, but in the code I see you set self._data_format = "NDHWC", which is channels_last. So does your code support channels_last?

Thanks for your contribution.
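
For what it is worth, a quick shape check (a sketch with illustrative sizes, not an authoritative answer) suggests the layer consumes channels_last / NDHWC input, since the channel axis goes last:

    import numpy as np
    from keras.layers import Input
    from keras.models import Model
    from DepthwiseConv3D import DepthwiseConv3D

    inputs = Input((16, 16, 16, 8))    # depth, height, width, channels -- channels_last
    x = DepthwiseConv3D(kernel_size=(3, 3, 3), padding='same')(inputs)
    model = Model(inputs, x)
    print(model.predict(np.zeros((1, 16, 16, 16, 8))).shape)   # expect (1, 16, 16, 16, 8)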

Depthwise atrous convolution

Hi

Thanks for sharing your work.

If I increase the dilation rate, the operation becomes a depthwise atrous convolution, correct?

As a side question, any idea why I cannot use a stride greater than 1 when the dilation rate is greater than 1?

Thanks
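
My understanding, not the author's reply: yes, with a dilation rate greater than 1 the layer performs a dilated (atrous) depthwise convolution, and TensorFlow's convolution ops generally refuse to combine strides greater than 1 with dilation rates greater than 1, which would explain the stride restriction. A minimal sketch with illustrative shapes:

    from keras.layers import Input
    from keras.models import Model
    from DepthwiseConv3D import DepthwiseConv3D

    inputs = Input((32, 32, 32, 4))
    x = DepthwiseConv3D(kernel_size=(3, 3, 3),
                        dilation_rate=(2, 2, 2),   # atrous / dilated kernel
                        strides=(1, 1, 1),         # keep the stride at 1 when dilating
                        padding='same')(inputs)
    model = Model(inputs, x)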

Using dilation gives an error

I want to use your code for DeepLab but in 3D. My issue is that when I use a dilation rate above 1, I get "No algorithm worked!". I'm using tensorflow 2.1; I also tried this in Google Colab, which gives a different error. Thank you for your awesome work.

    import numpy as np
    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model
    from DepthwiseConv3D import DepthwiseConv3D

    input1 = Input((128, 128, 128, 1))
    x = DepthwiseConv3D(3, padding='same', dilation_rate=(2, 2, 2))(input1)
    model = Model(input1, x)

    volume = np.zeros((1, 128, 128, 128, 1))
    model.predict(volume)


The code gave me the error:

----> 1 model.predict(volume)

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\keras\engine\training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1011         max_queue_size=max_queue_size,
   1012         workers=workers,
-> 1013         use_multiprocessing=use_multiprocessing)
   1014 
   1015   def reset_metrics(self):

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in predict(self, model, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    496         model, ModeKeys.PREDICT, x=x, batch_size=batch_size, verbose=verbose,
    497         steps=steps, callbacks=callbacks, max_queue_size=max_queue_size,
--> 498         workers=workers, use_multiprocessing=use_multiprocessing, **kwargs)
    499 
    500 

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _model_iteration(self, model, mode, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    473               mode=mode,
    474               training_context=training_context,
--> 475               total_epochs=1)
    476           cbks.make_logs(model, epoch_logs, result, mode)
    477 

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    126         step=step, mode=mode, size=current_batch_size) as batch_logs:
    127       try:
--> 128         batch_outs = execution_function(iterator)
    129       except (StopIteration, errors.OutOfRangeError):
    130         # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in execution_function(input_fn)
     96     # `numpy` translates Tensors to values in Eager mode.
     97     return nest.map_structure(_non_none_constant_value,
---> 98                               distributed_function(input_fn))
     99 
    100   return execution_function

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\eager\def_function.py in __call__(self, *args, **kwds)
    566         xla_context.Exit()
    567     else:
--> 568       result = self._call(*args, **kwds)
    569 
    570     if tracing_count == self._get_tracing_count():

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\eager\def_function.py in _call(self, *args, **kwds)
    636               *args, **kwds)
    637       # If we did not create any variables the trace we have is good enough.
--> 638       return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
    639 
    640     def fn_with_cond(*inner_args, **inner_kwds):

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\eager\function.py in _filtered_call(self, args, kwargs)
   1609          if isinstance(t, (ops.Tensor,
   1610                            resource_variable_ops.BaseResourceVariable))),
-> 1611         self.captured_inputs)
   1612 
   1613   def _call_flat(self, args, captured_inputs, cancellation_manager=None):

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1690       # No tape is watching; skip to running the function.
   1691       return self._build_call_outputs(self._inference_function.call(
-> 1692           ctx, args, cancellation_manager=cancellation_manager))
   1693     forward_backward = self._select_forward_and_backward_functions(
   1694         args,

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\eager\function.py in call(self, ctx, args, cancellation_manager)
    543               inputs=args,
    544               attrs=("executor_type", executor_type, "config_proto", config),
--> 545               ctx=ctx)
    546         else:
    547           outputs = execute.execute_with_cancellation(

~\Miniconda3\envs\nnseries\lib\site-packages\tensorflow_core\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     keras_symbolic_tensors = [

~\Miniconda3\envs\nnseries\lib\site-packages\six.py in raise_from(value, from_value)

NotFoundError:  No algorithm worked!
	 [[node model_8/depthwise_conv3d_2/Conv3D (defined at C:\Users\PC\DepthwiseConv3D.py:244) ]] [Op:__inference_distributed_function_959]

Function call stack:
distributed_function
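
One hedged way to narrow this down (assuming the failure is a missing or unsupported GPU kernel for dilated 3D convolution rather than a bug in the layer itself, which I cannot confirm) is to run the forward pass eagerly on the CPU and see whether the same graph works there:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model
    from DepthwiseConv3D import DepthwiseConv3D

    input1 = Input((128, 128, 128, 1))
    x = DepthwiseConv3D(3, padding='same', dilation_rate=(2, 2, 2))(input1)
    model = Model(input1, x)

    volume = np.zeros((1, 128, 128, 128, 1), dtype=np.float32)
    with tf.device('/CPU:0'):     # pin the eager forward pass to the CPU
        out = model(volume)
    print(out.shape)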

[question] Is this a channel-wise convolution ?

Hi,

I just needed a clarification about your function.
In my application I have a tensor that is the concatenation of two 3D volumes, so its size is [height, width, depth, n_channels=2]. I want to apply n_filters 3D kernels of size [3, 3, 3] to each of the volumes, so the output tensor would be [new_height, new_width, new_depth, n_filters, n_channels=2]. Is this what your function does?

Thanks,
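
My reading of the layer, offered as a sketch rather than the author's answer (the sizes below are illustrative): with depth_multiplier=n_filters, each of the two input channels gets its own n_filters kernels and the results are stacked along the channel axis, so you get 2 * n_filters output channels rather than a separate filter axis:

    from keras.layers import Input
    from keras.models import Model
    from DepthwiseConv3D import DepthwiseConv3D

    n_filters = 8
    inputs = Input((32, 32, 32, 2))                 # two concatenated 3D volumes
    x = DepthwiseConv3D(kernel_size=(3, 3, 3),
                        depth_multiplier=n_filters,
                        padding='same')(inputs)
    model = Model(inputs, x)
    print(model.output_shape)                       # (None, 32, 32, 32, 2 * n_filters)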
