
imagine-nn's Introduction

imagine-nn

Torch neural network routines from IMAGINE/LIGM, Université Paris-Est Marne-la-Vallée.

The following modules are currently available:

inn.SpatialStochasticPooling(kW,kH,dW,dH)
inn.SpatialSameResponseNormalization([size = 3], [alpha = 0.00005], [beta = 0.75])
inn.MeanSubtraction(mean)
inn.SpatialPyramidPooling({{w1,h1},{w2,h2},...,{wn,hn}})
inn.ROIPooling(W,H):setSpatialScale(scale)

inn.SpatialStochasticPooling is a fully working implementation of stochastic pooling; see http://arxiv.org/abs/1301.3557 for the reference.

inn.ROIPooling is a Spatial Adaptive Max Pooling layer for region proposals, as used in Fast R-CNN, with bugfixes and a 50-times-faster backprop. Set v2 = false to use its old version. inn.ROIPooling expects a table as input: the first element is the features in NxDxHxW format, where N is the number of images; the second element is the bounding boxes in Bx5 format, where B is the number of regions to pool and the 5 columns are the image id followed by the bbox. The image id is in the [1,N] range; boxes are given as [x1,y1,x2,y2].
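The expected input format can be sketched as follows (a minimal sketch assuming a CUDA-capable setup with cutorch installed; the tensor sizes here are arbitrary illustrations, not values from the library):

```lua
local inn = require 'inn'

-- features: N=2 images, D=3 feature maps, 8x8 spatial size (NxDxHxW)
local features = torch.CudaTensor(2, 3, 8, 8):uniform()

-- rois: Bx5, each row is {image_id, x1, y1, x2, y2}, image_id in [1,N]
local rois = torch.CudaTensor{{1, 1, 1, 4, 4},
                              {2, 2, 2, 8, 8}}

-- pool every region into a fixed 3x3 grid; setSpatialScale maps box
-- coordinates from image resolution down to feature-map resolution
local pooler = inn.ROIPooling(3, 3):setSpatialScale(1):cuda()

local output = pooler:forward({features, rois})
print(output:size())  -- BxDxHxW, i.e. 2x3x3x3
```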

inn.SpatialSameResponseNormalization is a local response normalization within the same map, in BDHW format. For details, refer to https://code.google.com/p/cuda-convnet/wiki/LayerParams#Local_response_normalization_layer_(same_map)

inn.MeanSubtraction(mean) subtracts the ImageNet mean directly on the GPU. The mean tensor is expanded to BDHW batches without using additional memory.
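The no-extra-memory expansion relies on torch.expand, which creates a zero-stride view along the batch dimension rather than copying data; a minimal CPU sketch (the mean value 118 is hypothetical):

```lua
require 'torch'

-- a per-channel mean image in DxHxW, e.g. an ImageNet-style mean
local mean = torch.FloatTensor(3, 224, 224):fill(118)

-- expand to a BxDxHxW view over the same storage: no new allocation
local B = 16
local batchMean = mean:view(1, 3, 224, 224):expand(B, 3, 224, 224)

print(batchMean:size())            -- 16x3x224x224
print(batchMean:storage():size())  -- still 3*224*224 = 150528 elements
```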

inn.SpatialPyramidPooling({{w1,h1},{w2,h2},...,{wn,hn}}) is a pyramid of regions obtained by applying Spatial Adaptive Max Pooling with parameters (w1,h1),...,(wn,hn) to the input. The result is a fixed-size vector of length w1*h1 + w2*h2 + ... + wn*hn per feature map, for any input size. For details see http://arxiv.org/abs/1406.4729
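The output length can be computed from the pyramid levels alone; a plain-Lua sketch (the level sizes match the snippet later in this page, and D = 256 feature maps is a hypothetical choice):

```lua
-- pyramid levels {w_i, h_i}; the pooled bins from all levels are
-- concatenated, so each feature map contributes w1*h1 + ... + wn*hn values
local levels = {{6, 6}, {4, 4}, {2, 2}}
local D = 256  -- number of input feature maps (hypothetical)

local bins = 0
for _, l in ipairs(levels) do bins = bins + l[1] * l[2] end
print(bins)      -- 36 + 16 + 4 = 56 bins per feature map
print(D * bins)  -- 14336-dimensional fixed-size vector
```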

OBSOLETE modules

The difference between inn.SpatialMax(Average)Pooling and nn.SpatialMax(Average)Pooling is that the output size is computed with ceil instead of floor (as in Caffe and cuda-convnet2). Also, inn.SpatialAveragePooling does true average pooling, meaning that it divides outputs by kW*kH. inn.SpatialMax(Average)Pooling(kW,kH,dW,dH) is equivalent to cudnn.SpatialMax(Average)Pooling(kW,kH,dW,dH):ceil().
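The ceil/floor difference is visible directly in the output-size formula; a plain-Lua sketch of the two conventions (the helper names are illustrative, not library functions):

```lua
-- output width for pooling with kernel kW and stride dW over input width iW
local function outSizeFloor(iW, kW, dW)
  return math.floor((iW - kW) / dW) + 1  -- nn default
end
local function outSizeCeil(iW, kW, dW)
  return math.ceil((iW - kW) / dW) + 1   -- inn / Caffe / cuda-convnet2
end

-- a 3-wide kernel with stride 2:
print(outSizeFloor(7, 3, 2))  -- 3 (window fits exactly)
print(outSizeCeil(7, 3, 2))   -- 3
print(outSizeFloor(8, 3, 2))  -- 3 (last column dropped)
print(outSizeCeil(8, 3, 2))   -- 4 (partial window kept)
```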

inn.SpatialCrossResponseNormalization is a local response normalization across maps, in BDHW format (thanks to Caffe!). For details, refer to https://code.google.com/p/cuda-convnet/wiki/LayerParams#Local_response_normalization_layer_(across_maps)

inn.SpatialMaxPooling(kW,kH,dW,dH)
-- OBSOLETE! USE nn.SpatialMaxPooling(kW,kH,dW,dH,padW,padH):ceil()
inn.SpatialAveragePooling(kW,kH,dW,dH)
-- OBSOLETE! USE nn.SpatialAveragePooling(kW,kH,dW,dH,padW,padH):ceil()
inn.SpatialCrossResponseNormalization(size, [alpha = 0.0001], [beta = 0.75], [k = 1])
-- OBSOLETE! USE nn.SpatialCrossMapLRN with the same arguments

imagine-nn's People

Contributors

fmassa, jhjin, mys007, redzhepdx, shivak, szagoruyko


imagine-nn's Issues

can't install on OS X 10.10.5

Running luarocks install inn, I get the following errors (and weird terminal output).
Any help appreciated.

In file included from <built-in>:332:
In file included from <built-in>:13:
In file included from /usr/local/cuda/include/cuda_runtime.h:112:
/usr/local/cuda/include/common_functions.h:65:10: fatal error: 'string.h' file not found
#include <string.h>
         ^

(the same error is printed several times, interleaved by parallel compiler jobs)

1 error generated.
CMake Error at THC_generated_THCBlas.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCBlas.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCBlas.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
1 error generated.
CMake Error at THC_generated_THCStorageCopy.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCStorageCopy.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCStorageCopy.cu.o] Error 1
1 error generated.
1 error generated.
CMake Error at THC_generated_THCTensor.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCTensor.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCTensor.cu.o] Error 1
CMake Error at THC_generated_THCReduceApplyUtils.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCReduceApplyUtils.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCReduceApplyUtils.cu.o] Error 1
1 error generated.
CMake Error at THC_generated_THCTensorCopy.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorCopy.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCTensorCopy.cu.o] Error 1
1 error generated.
CMake Error at THC_generated_THCTensorMath.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorMath.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCTensorMath.cu.o] Error 1
1 error generated.
CMake Error at THC_generated_THCStorage.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCStorage.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCStorage.cu.o] Error 1
1 error generated.
CMake Error at THC_generated_THCTensorTopK.cu.o.cmake:207 (message):
Error generating
/tmp/luarocks_cutorch-scm-1-3131/cutorch/build/lib/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorTopK.cu.o

make[2]: *** [lib/THC/CMakeFiles/THC.dir/THC_generated_THCTensorTopK.cu.o] Error 1
make[1]: *** [lib/THC/CMakeFiles/THC.dir/all] Error 2
make: *** [all] Error 2

Error: Failed installing dependency: https://raw.githubusercontent.com/torch/rocks/master/cunn-scm-1.rockspec - Failed installing dependency: https://raw.githubusercontent.com/torch/rocks/master/cutorch-scm-1.rockspec - Build error: Failed building.

Failed Installing

Hi,

I have problems installing the repository. I cloned it, and here is the output of luarocks make:

luarocks make
cmake -E make_directory build;
cd build;
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH="/home/trottier/torch/install/bin/.." -DCMAKE_INSTALL_PREFIX="/home/trottier/torch/install/lib/luarocks/rocks/inn/1.0-0"

-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Torch7 in /home/trottier/torch/install
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr (found suitable version "7.5", minimum required is "6.5") 
-- Compiling for CUDA architecture: 3.5
-- Configuring done
-- Generating done
-- Build files have been written to: /home/trottier/Downloads/imagine-nn/build
cd build && make install
[ 33%] Building NVCC (Device) object CMakeFiles/inn.dir/inn_generated_SpatialStochasticPooling.cu.o
CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THStorage.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THStorageCopy.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THCStorage.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THCStorageCopy.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THTensor.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THTensorCopy.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THTensorRandom.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THTensorMath.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THTensorConv.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THTensorLapack.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THCTensor.h


CMake Warning at /usr/share/cmake-3.5/Modules/FindCUDA/make2cmake.cmake:65 (message):
   Removing non-existent dependency file: generic/THCTensorCopy.h


/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
   return (char *) memcpy (__dest, __src, __n) + __n;
                                          ^
CMake Error at inn_generated_SpatialStochasticPooling.cu.o.cmake:266 (message):
  Error generating file
  /home/trottier/Downloads/imagine-nn/build/CMakeFiles/inn.dir//./inn_generated_SpatialStochasticPooling.cu.o


CMakeFiles/inn.dir/build.make:70: recipe for target 'CMakeFiles/inn.dir/inn_generated_SpatialStochasticPooling.cu.o' failed
make[2]: *** [CMakeFiles/inn.dir/inn_generated_SpatialStochasticPooling.cu.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/inn.dir/all' failed
make[1]: *** [CMakeFiles/inn.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

Error: Build error: Failed installing.

ROIPooling module backwards pass is non-deterministic

I noticed that I wasn't getting repeatable training results and traced the problem to the backward pass of the ROIPooling module. It appears to be non-deterministic. Why is this? Is it expected? Is there anything that can be done to fix it?

SpatialCrossResponseNormalization bug

When the input tensor is too big, the number of blocks exceeds 65535, which causes crashes when compiling with arch=sm_20. That causes issues when trying to load the VGG_M network via loadcaffe with 'cudnn', which uses inn.SCRN.

Any reason for not using sm_35?

issue when initializing SpatialPyramidPooling

I ran into a problem when calling SpatialPyramidPooling.

/usr/local/share/lua/5.1/inn/SpatialPyramidPooling.lua:12: attempt to call field 'Contiguous' (a nil value).

Part of my code is:

model:add(nn.SpatialConvolutionMM(nstates[1], nstates[2], filtsize, filtsize))
model:add(nn.ReLU())

model:add(inn.SpatialPyramidPooling({{6,6},{4,4},{2,2}}))

How can I solve the problem? Thanks a lot.

Problem in compiling with CUDA 7.5 on Ubuntu 16.04

When running make after the cmake preprocessing step, I get this error message:

/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
   return (char *) memcpy (__dest, __src, __n) + __n;
                                          ^
CMake Error at inn_generated_SpatialStochasticPooling.cu.o.cmake:267 (message):
  Error generating file
  /home/samson/Repo/imagine-nn/build/CMakeFiles/inn.dir//./inn_generated_SpatialStochasticPooling.cu.o


CMakeFiles/inn.dir/build.make:70: recipe for target 'CMakeFiles/inn.dir/inn_generated_SpatialStochasticPooling.cu.o' failed
make[2]: *** [CMakeFiles/inn.dir/inn_generated_SpatialStochasticPooling.cu.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/inn.dir/all' failed
make[1]: *** [CMakeFiles/inn.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

From this issue, BVLC/caffe/issues/4046 , it seems to be an incompatibility between g++-5.0 and CUDA 7.5.

I'm not familiar with the cmake tool chain, so I just added -D_FORCE_INLINES to CUDA_NVCC_FLAGS in CMakeCache.txt to solve the compile problem. The diff is as follows.

207c207
< CUDA_NVCC_FLAGS:STRING=-D_FORCE_INLINES

---
> CUDA_NVCC_FLAGS:STRING=

encountering error when converting resnet-50 torch model by facebook

When I used the script at https://gist.github.com/szagoruyko/8828e09cc4687afd324d (which uses utils.lua) to convert the Facebook ResNet-50 model (https://github.com/facebook/fb.resnet.torch), I got the following error:

Couldnt fold these: {
1 :
{
gradBias : CudaTensor - size: 256
bias : CudaTensor - size: 256
output : CudaTensor - size: 10x256x56x56
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 256
running_var : CudaTensor - size: 256
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 256
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 256
gradWeight : CudaTensor - size: 256
save_std : CudaTensor - size: 256
train : false
}
2 :
{
gradBias : CudaTensor - size: 256
bias : CudaTensor - size: 256
output : CudaTensor - size: 10x256x56x56
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 256
running_var : CudaTensor - size: 256
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 256
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 256
gradWeight : CudaTensor - size: 256
save_std : CudaTensor - size: 256
train : false
}
3 :
{
gradBias : CudaTensor - size: 256
bias : CudaTensor - size: 256
output : CudaTensor - size: 10x256x56x56
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 256
running_var : CudaTensor - size: 256
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 256
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 256
gradWeight : CudaTensor - size: 256
save_std : CudaTensor - size: 256
train : false
}
4 :
{
gradBias : CudaTensor - size: 512
bias : CudaTensor - size: 512
output : CudaTensor - size: 10x512x28x28
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 512
running_var : CudaTensor - size: 512
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 512
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 512
gradWeight : CudaTensor - size: 512
save_std : CudaTensor - size: 512
train : false
}
5 :
{
gradBias : CudaTensor - size: 512
bias : CudaTensor - size: 512
output : CudaTensor - size: 10x512x28x28
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 512
running_var : CudaTensor - size: 512
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 512
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 512
gradWeight : CudaTensor - size: 512
save_std : CudaTensor - size: 512
train : false
}
6 :
{
gradBias : CudaTensor - size: 512
bias : CudaTensor - size: 512
output : CudaTensor - size: 10x512x28x28
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 512
running_var : CudaTensor - size: 512
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 512
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 512
gradWeight : CudaTensor - size: 512
save_std : CudaTensor - size: 512
train : false
}
7 :
{
gradBias : CudaTensor - size: 512
bias : CudaTensor - size: 512
output : CudaTensor - size: 10x512x28x28
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 512
running_var : CudaTensor - size: 512
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 512
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 512
gradWeight : CudaTensor - size: 512
save_std : CudaTensor - size: 512
train : false
}
8 :
{
gradBias : CudaTensor - size: 1024
bias : CudaTensor - size: 1024
output : CudaTensor - size: 10x1024x14x14
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 1024
running_var : CudaTensor - size: 1024
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 1024
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 1024
gradWeight : CudaTensor - size: 1024
save_std : CudaTensor - size: 1024
train : false
}
9 :
{
gradBias : CudaTensor - size: 1024
bias : CudaTensor - size: 1024
output : CudaTensor - size: 10x1024x14x14
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 1024
running_var : CudaTensor - size: 1024
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 1024
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 1024
gradWeight : CudaTensor - size: 1024
save_std : CudaTensor - size: 1024
train : false
}
10 :
{
gradBias : CudaTensor - size: 1024
bias : CudaTensor - size: 1024
output : CudaTensor - size: 10x1024x14x14
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 1024
running_var : CudaTensor - size: 1024
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 1024
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 1024
gradWeight : CudaTensor - size: 1024
save_std : CudaTensor - size: 1024
train : false
}
11 :
{
gradBias : CudaTensor - size: 1024
bias : CudaTensor - size: 1024
output : CudaTensor - size: 10x1024x14x14
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 1024
running_var : CudaTensor - size: 1024
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 1024
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 1024
gradWeight : CudaTensor - size: 1024
save_std : CudaTensor - size: 1024
train : false
}
12 :
{
gradBias : CudaTensor - size: 1024
bias : CudaTensor - size: 1024
output : CudaTensor - size: 10x1024x14x14
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 1024
running_var : CudaTensor - size: 1024
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 1024
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 1024
gradWeight : CudaTensor - size: 1024
save_std : CudaTensor - size: 1024
train : false
}
13 :
{
gradBias : CudaTensor - size: 1024
bias : CudaTensor - size: 1024
output : CudaTensor - size: 10x1024x14x14
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 1024
running_var : CudaTensor - size: 1024
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 1024
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 1024
gradWeight : CudaTensor - size: 1024
save_std : CudaTensor - size: 1024
train : false
}
14 :
{
gradBias : CudaTensor - size: 2048
bias : CudaTensor - size: 2048
output : CudaTensor - size: 10x2048x7x7
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 2048
running_var : CudaTensor - size: 2048
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 2048
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 2048
gradWeight : CudaTensor - size: 2048
save_std : CudaTensor - size: 2048
train : false
}
15 :
{
gradBias : CudaTensor - size: 2048
bias : CudaTensor - size: 2048
output : CudaTensor - size: 10x2048x7x7
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 2048
running_var : CudaTensor - size: 2048
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 2048
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 2048
gradWeight : CudaTensor - size: 2048
save_std : CudaTensor - size: 2048
train : false
}
16 :
{
gradBias : CudaTensor - size: 2048
bias : CudaTensor - size: 2048
output : CudaTensor - size: 10x2048x7x7
gradInput : CudaTensor - empty
save_mean : CudaTensor - size: 2048
running_var : CudaTensor - size: 2048
momentum : 0.1
eps : 1e-05
weight : CudaTensor - size: 2048
_type : "torch.CudaTensor"
affine : true
running_mean : CudaTensor - size: 2048
gradWeight : CudaTensor - size: 2048
save_std : CudaTensor - size: 2048
train : false
}
}

Any idea how to resolve it?

ROIPooling layer with v2 gives me wrong output

Hi, this is Jaehyun Lim.

I tried to test the ROIPooling layer with simple custom code, and with v2 it gave me wrong outputs.

Here is the code I ran:

local inn = require 'inn'

local n_images = 2
local sz = torch.Tensor{2, 5, 5}
local input_image = torch.CudaTensor(n_images, sz[1], sz[2], sz[3]):copy(torch.linspace(1, n_images * sz[1] * sz[2] * sz[3], n_images * sz[1] * sz[2] * sz[3]):reshape(n_images, sz[1], sz[2], sz[3]))

print(input_image)

local n_rois = 2
local rois=torch.CudaTensor(n_rois,5)
for i=1,n_rois do
  idx=torch.randperm(n_images)[1]
  y=torch.randperm(sz[3])[{{1,2}}]:sort()
  x=torch.randperm(sz[2])[{{1,2}}]:sort()
  rois[{i,{}}] = torch.Tensor({idx,x[1],y[1],x[2],y[2]})
  --rois[{i,{}}] = torch.Tensor({idx,1,1,sz[3],sz[2]})
end

print(rois)

local model = inn.ROIPooling(3,3)
model:cuda()

local output = model:forward({input_image, rois})
print(output)

model.v2 = false
local output = model:forward({input_image, rois})
print(output)

and I got

input_image = 
(1,1,.,.) = 
    1    2    3    4    5
    6    7    8    9   10
   11   12   13   14   15
   16   17   18   19   20
   21   22   23   24   25

(2,1,.,.) = 
   51   52   53   54   55
   56   57   58   59   60
   61   62   63   64   65
   66   67   68   69   70
   71   72   73   74   75

(1,2,.,.) = 
   26   27   28   29   30
   31   32   33   34   35
   36   37   38   39   40
   41   42   43   44   45
   46   47   48   49   50

(2,2,.,.) = 
   76   77   78   79   80
   81   82   83   84   85
   86   87   88   89   90
   91   92   93   94   95
   96   97   98   99  100
[torch.CudaTensor of size 2x2x5x5]

rois = 
1  1  1  4  5
2  1  2  3  4
[torch.CudaTensor of size 2x5]

output via v2 = 
(1,1,.,.) = 
   1   2   3
  11  12  13
  16  17  18

(2,1,.,.) = 
  56   0  57
   0   0   0
  61   0  62

(1,2,.,.) = 
  26  27  28
  36  37  38
  41  42  43

(2,2,.,.) = 
  81   0  82
   0   0   0
  86   0  87
[torch.CudaTensor of size 2x2x3x3]

output via v1 = 
(1,1,.,.) = 
   7   8   9
  17  18  19
  22  23  24

(2,1,.,.) = 
  56  57  58
  61  62  63
  66  67  68

(1,2,.,.) = 
  32  33  34
  42  43  44
  47  48  49

(2,2,.,.) = 
  81  82  83
  86  87  88
  91  92  93
[torch.CudaTensor of size 2x2x3x3]

I'm using Lua 5.2 instead of LuaJIT 2.1 because of memory issues related to LuaJIT.

I can use v1, but I'd like to know whether there was a mistake in my usage or something of that sort.

I would appreciate it if anyone could help me solve this issue.

Thanks

Best regards,

Jaehyun

Looking for helps for ROIWarping layer

Hi, this is Jaehyun Lim

I'm aware this is not really a proper post for the issue tracker, but I'm leaving it here to look for help.

I am currently working on re-implementing the winning solution of ILSVRC & MSCOCO 2015 competition (http://arxiv.org/abs/1512.04412).

As a part of the work, I made the ROI warping layer described in the paper. (see PR #37)

Unfortunately, the layer is not complete. However, I want to share it so that somebody may find the bugs that I couldn't find until now.

Note:

  1. Equation (8) in the paper does not seem to make sense to me, so I implemented bilinear warping/mapping as described on Wikipedia.
  2. The backprop (gradient) w.r.t. the image input (or blob) works fine (confirmed with test_jacobian), but I couldn't make the backprop w.r.t. delta_rois (Fast R-CNN style ROI re-parameterization w.r.t. anchors) work.

The backprop w.r.t. delta_rois is a bit off from nn.Jacobian.forward. I have tried to find what I might have missed, but no luck so far.

I would appreciate it if someone could help me find the bugs.

Best regards,

Jaehyun

SpatialSameResponseNormalization doesn't pass test

Running 4 tests
___*  ==> Done Completed 16 asserts in 4 tests with 2 errors
--------------------------------------------------------------------------------
SpatialSameResponseNormalization
error on state
 LT(<) violation   val=nan, condition=0.001
    /home/zagoruys/torch/install/share/lua/5.1/torch/Tester.lua:26: in function 'assertlt'
    test/test_jacobian.lua:146: in function <test/test_jacobian.lua:137>

--------------------------------------------------------------------------------
SpatialSameResponseNormalization
error on state (Batch)
 LT(<) violation   val=nan, condition=0.001
    /home/zagoruys/torch/install/share/lua/5.1/torch/Tester.lua:26: in function 'assertlt'
    test/test_jacobian.lua:154: in function <test/test_jacobian.lua:137>

--------------------------------------------------------------------------------

@fmassa any ideas why?

inn.SpatialSameResponseNormalization.listModules() not working

The following code

require 'inn'
local m=inn.SpatialSameResponseNormalization(3,0.00005,0.75)
print(m:listModules())

gives an error

/home/tingfan/dnn/torch/install/share/lua/5.1/nn/Module.lua:286: attempt to index a nil value
stack traceback:
        /home/tingfan/dnn/torch/install/share/lua/5.1/nn/Module.lua:286: in function 'listModules'
        [string "print(m:listModules())"]:1: in main chunk
        [C]: in function 'xpcall'
        ...e/tingfan/dnn/torch/install/share/lua/5.1/trepl/init.lua:648: in function 'repl'
        .../dnn/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:185: in main chunk
        [C]: at 0x00406670

because the inherited listModules() assumes self.modules is an index-able array.

One way to fix it is to add its own listModules() member function:

function inn.SpatialSameResponseNormalization:listModules()
    return {self.modules}
end

Or make nn.Sequential.__index() call Sequential:get(). I would prefer the latter, but I'm not sure if it is legit.

Quick question regarding inn.ROIPooling

The readme says: "Image id is in [1,N] range, boxes are in [x1,y1,x2,y2]."
Does the coordinate indexing start at (0,0) or (1,1)?
Does xi correspond to the i-th row (Matlab format) or the i-th column?

SpatialStochasticPooling: memory leak

THCudaTensor_free(state, input) is missing in SpatialStochasticPooling_updateOutput. SpatialStochasticPooling_updateGradInput should check for non-contiguous gradOutput.

Test error on SpatialPyramidPooling and fail on SpatialSameResponseNormalization

Hi, this is Jaehyun Lim.

I installed the most recent version of inn (4c2dc1c) and tried to run test/test_jacobian.lua.

I got the following messages:

1/4 ROIPooling .......................................................... [PASS]
2/4 SpatialPyramidPooling ............................................... [ERROR]
3/4 SpatialStochasticPooling ............................................ [PASS]
4/4 SpatialSameResponseNormalization .................................... [FAIL]
Completed 11 asserts in 4 tests with 1 failure and 1 error
--------------------------------------------------------------------------------
SpatialPyramidPooling
 Function call failed
...hyun/github/torch/install/share/lua/5.1/nn/Container.lua:69: 
In 1 module of inn.SpatialPyramidPooling:
In 3 module of nn.Sequential:
...yun/github/torch/install/share/lua/5.1/nn/Contiguous.lua:13: attempt to index local 'gradOutput' (a nil value)
stack traceback:
        ...yun/github/torch/install/share/lua/5.1/nn/Contiguous.lua:13: in function <...yun/github/torch/install/share/lua/5.1/nn/Contiguous.lua:12>
        [C]: in function 'xpcall'
        ...hyun/github/torch/install/share/lua/5.1/nn/Container.lua:65: in function 'rethrowErrors'
        ...yun/github/torch/install/share/lua/5.1/nn/Sequential.lua:55: in function <...yun/github/torch/install/share/lua/5.1/nn/Sequential.lua:50>
        [C]: in function 'xpcall'
        ...hyun/github/torch/install/share/lua/5.1/nn/Container.lua:65: in function 'rethrowErrors'
        ...jaehyun/github/torch/install/share/lua/5.1/nn/Concat.lua:37: in function 'updateGradInput'
        ...ehyun/github/torch/install/share/lua/5.1/nn/Jacobian.lua:21: in function 'backward'
        ...ehyun/github/torch/install/share/lua/5.1/nn/Jacobian.lua:233: in function 'testJacobian'
        test_jacobian.lua:62: in function <test_jacobian.lua:48>
        [C]: in function 'xpcall'
        ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:476: in function '_pcall'
        ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:436: in function '_run'
        ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:354: in function 'run'
        test_jacobian.lua:157: in main chunk
        [C]: in function 'dofile'
        ...thub/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670

WARNING: If you see a stack trace below, it doesn't point to the place where this error occured. Please use only the one above.
stack traceback:
        [C]: in function 'error'
        ...hyun/github/torch/install/share/lua/5.1/nn/Container.lua:69: in function 'rethrowErrors'
        ...jaehyun/github/torch/install/share/lua/5.1/nn/Concat.lua:37: in function 'updateGradInput'
        ...ehyun/github/torch/install/share/lua/5.1/nn/Jacobian.lua:21: in function 'backward'
        ...ehyun/github/torch/install/share/lua/5.1/nn/Jacobian.lua:233: in function 'testJacobian'
        test_jacobian.lua:62: in function <test_jacobian.lua:48>
        [C]: in function 'xpcall'
        ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:476: in function '_pcall'
        ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:436: in function '_run'
        ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:354: in function 'run'
        test_jacobian.lua:157: in main chunk
        [C]: in function 'dofile'
        ...thub/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670

--------------------------------------------------------------------------------
SpatialSameResponseNormalization
error on state 
LT failed: nan >= 0.001
stack traceback:
        test_jacobian.lua:91: in function <test_jacobian.lua:82>
--------------------------------------------------------------------------------
SpatialSameResponseNormalization
error on state (Batch) 
LT failed: nan >= 0.001
stack traceback:
        test_jacobian.lua:99: in function <test_jacobian.lua:82>
--------------------------------------------------------------------------------
/home/jaehyun/github/torch/install/bin/luajit: ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:362: An error was found while running tests!
stack traceback:
        [C]: in function 'assert'
        ...hyun/github/torch/install/share/lua/5.1/torch/Tester.lua:362: in function 'run'
        test_jacobian.lua:157: in main chunk
        [C]: in function 'dofile'
        ...thub/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670

Is there anyone who can help me solve this problem? :)

Thanks

Best regards,

Jaehyun

thoughts on porting averagepooling ceil mode into opencl?

Hi Sergey,

It looks like some neural-style models could plausibly benefit from average pooling, which therefore needs ceil mode. So I'm vaguely pondering porting your inn average pooling to CL somehow/somewhere. It seems the easiest way to do this might be to fork inn itself, e.g. to 'clinn'? (Otherwise it would mean porting everything into nn, which, whilst a good long-term goal, would probably take more than a few days.)

Hugh

(PS: I suppose one other way would be to port directly into clnn and then hack neural-style's loadcaffe_wrapper.lua; that might actually be easier... still pondering...)
