
torch7's Introduction

Join the chat at https://gitter.im/torch/torch7

Development Status

Torch is not in active development. The functionality provided by the C backend of Torch (the TH, THNN, THC, and THCUNN libraries) is actively extended and rewritten in the ATen C++11 library (source, mirror). ATen exposes all operators you would expect from torch7, nn, cutorch, and cunn directly in C++11, and includes additional support for sparse tensors and distributed operations. Note, however, that the API and semantics of the backend libraries in Torch-7 differ from those provided by ATen. For example, ATen provides numpy-style broadcasting while the TH* libraries do not. For information on building the forked Torch-7 libraries in C, refer to "The C interface" in pytorch/aten/src/README.md.

Need help?

Torch7 community support can be found at the following locations. As of 2019, the Torch-7 community is close to non-existent.

Torch Package Reference Manual

Torch is the main package in Torch7 where data structures for multi-dimensional tensors and mathematical operations over these are defined. Additionally, it provides many utilities for accessing files, serializing objects of arbitrary types and other useful utilities.

Torch Packages

  • Tensor Library
    • Tensor defines the all-powerful tensor object that provides multi-dimensional numerical arrays with type templating.
    • Mathematical operations that are defined for the tensor object types.
    • Storage defines a simple storage interface that controls the underlying storage for any tensor object.
  • File I/O Interface Library
  • Useful Utilities
    • Timer provides functionality for measuring time.
    • Tester is a generic tester framework.
    • CmdLine is a command line argument parsing utility.
    • Random defines a random number generator package with various distributions.
    • Finally, utility functions are provided for easy handling of Torch tensor types and class inheritance.


torch7's People

Contributors

adamlerer, andresy, apaszke, atcold, borisfom, btnc, cdluminate, clementfarabet, colesbury, d11, dominikgrewe, fidlej, fmassa, gchanan, georgostrovski, hughperkins, jokeren, jucor, killeent, koraykv, leonbottou, nicholas-leonard, nkoumchatzky, pavanky, rguthrie3, sergomezcol, soumith, timharley, tkoeppe, zakattacktwitter


torch7's Issues

Improving Random number initialization on Unix platforms

Currently torch initializes its random number generator using the time in seconds. When running jobs in parallel this makes it easy to accidentally end up with jobs that share the same seed.

I'm opening this issue to propose:
attempting to fopen '/dev/urandom' in THRandom_seed and using that to initialize the seed, falling back to time if that fails (i.e. on non-Unix systems).

If this approach seems reasonable I'll work on a patch.
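A minimal sketch of the proposed seeding strategy, in Lua for illustration (the actual patch would live in C inside THRandom_seed; robustSeed is a hypothetical helper):

```lua
require 'torch'

-- Hypothetical helper: seed from /dev/urandom, fall back to time.
local function robustSeed()
   local f = io.open('/dev/urandom', 'rb')
   if f then
      local bytes = f:read(4)
      f:close()
      if bytes and #bytes == 4 then
         local seed = 0
         for i = 1, 4 do
            seed = seed * 256 + bytes:byte(i)  -- combine 4 bytes into one integer
         end
         return seed
      end
   end
   return os.time()  -- fallback, e.g. on non-Unix systems
end

torch.manualSeed(robustSeed())
```

Parallel jobs launched in the same second would then still receive distinct seeds.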

Torch on Windows

Hi,

Roughly what work is needed to get Torch working on Windows as well?

deserialization fails on android if nans are present

If a tensor/table containing NaNs (which are allowed to be saved and loaded) is serialized to ASCII on x86
and then loaded on an ARM machine, the loading fails with the error:
read error: read %d blocks instead of %d
where %d can be various numbers.

broadcasting

What do you guys think about adding support for broadcasting?

In terms of implementation, I believe it would be a simple matter of adding intelligent expandAs calls to the beginning of BLAS functions.
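For illustration, the explicit expand call that broadcasting would insert automatically (a sketch of the current manual idiom, not the proposed implementation):

```lua
require 'torch'

local a = torch.randn(4, 3)
local b = torch.randn(1, 3)  -- row to broadcast across a's rows

-- Today the expansion must be written out by hand; broadcasting would imply it:
local c = torch.add(a, b:expandAs(a))  -- expandAs yields a zero-stride view, no copy
```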

CmdLine: parsing boolean options impractical

When parsing a command line, CmdLine interprets given boolean options as the negation of their default value. This is very annoying in the cases where one has fixed shell scripts controlling torch scripts and one fiddles around with default values in torch - the behaviour is then broken. I suggest that the special handling of boolean values in CmdLine:__readOption__() is dropped. In order to remain compatible with the current behaviour, there could be a "legacy" switch in CmdLine:__init() controlling this.
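To illustrate the behavior being objected to (a sketch; the option name is made up):

```lua
require 'torch'

local cmd = torch.CmdLine()
cmd:option('-shuffle', true, 'shuffle the training data')
local opt = cmd:parse(arg or {})

-- Passing "-shuffle" on the command line yields NOT(default):
-- with default=true,  "th script.lua -shuffle" gives opt.shuffle == false;
-- change the default to false and the same shell invocation now gives true,
-- which is what breaks fixed shell scripts when defaults are edited.
```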

PipeFile should handle SIGPIPE

I found this problem when using torch/gnuplot, but it seems reasonable that torch.PipeFile would allow the caller to handle EPIPE instead of bailing out of luajit due to SIGPIPE.

RNNs with Torch

Hi Soumith, I was wondering if you have any examples for running RNNs in Torch with LSTM?

Manual compilation fails on Red Hat

I want to install Torch locally on Red Hat since I don't have sudo privileges:

cd torch7
mkdir build
cd build
cmake ..
make

I got the following errors:

/torch7/lib/luaT/luaT.c: In function 'luaT_lua_newmetatable':
/torch7/lib/luaT/luaT.c:454: error: 'LUA_GLOBALSINDEX' undeclared (first use in this function)
/torch7/lib/luaT/luaT.c:454: error: (Each undeclared identifier is reported only once
/torch7/lib/luaT/luaT.c:454: error: for each function it appears in.)
make[2]: *** [lib/luaT/CMakeFiles/luaT.dir/luaT.c.o] Error 1
make[1]: *** [lib/luaT/CMakeFiles/luaT.dir/all] Error 2
make: *** [all] Error 2

I thereby try:

ack LUA_GLOBALSINDEX

got:

init.c
59:  lua_setfield(L, LUA_GLOBALSINDEX, "torch");

lib/luaT/luaT.c
454:    lua_getfield(L, LUA_GLOBALSINDEX, module_name);
456:    lua_pushvalue(L, LUA_GLOBALSINDEX);

utils.c
116:  lua_getfield(L, LUA_GLOBALSINDEX, "torch");

I am just wondering where LUA_GLOBALSINDEX is declared... Any help is appreciated in advance!!

Failed installing lzmq in ubuntu 12.04

Guys,

I am installing Torch on my Ubuntu 12.04, but it fails building lzmq. Are you able to help with this issue? Thanks in advance.
Wenfeng
.....
gcc -shared -o lzmq/timer.so -L/home/wenfeng/torch-distro/install/lib src/ztimer.o src/lzutils.o -lrt
gcc -O2 -fPIC -I/home/wenfeng/torch-distro/install/include -c src/lzmq.c -o src/lzmq.o -DLUAZMQ_USE_SEND_AS_BUF -DLUAZMQ_USE_TEMP_BUFFERS -DLUAZMQ_USE_ERR_TYPE_OBJECT -I/usr/include
src/lzmq.c:482:1: error: 'ZMQ_EVENT_CONNECTED' undeclared here (not in a function)
src/lzmq.c:483:1: error: 'ZMQ_EVENT_CONNECT_DELAYED' undeclared here (not in a function)
src/lzmq.c:484:1: error: 'ZMQ_EVENT_CONNECT_RETRIED' undeclared here (not in a function)
src/lzmq.c:486:1: error: 'ZMQ_EVENT_LISTENING' undeclared here (not in a function)
src/lzmq.c:487:1: error: 'ZMQ_EVENT_BIND_FAILED' undeclared here (not in a function)
src/lzmq.c:489:1: error: 'ZMQ_EVENT_ACCEPTED' undeclared here (not in a function)
src/lzmq.c:490:1: error: 'ZMQ_EVENT_ACCEPT_FAILED' undeclared here (not in a function)
src/lzmq.c:492:1: error: 'ZMQ_EVENT_CLOSED' undeclared here (not in a function)
src/lzmq.c:493:1: error: 'ZMQ_EVENT_CLOSE_FAILED' undeclared here (not in a function)
src/lzmq.c:494:1: error: 'ZMQ_EVENT_DISCONNECTED' undeclared here (not in a function)
src/lzmq.c:499:1: error: 'ZMQ_EVENT_ALL' undeclared here (not in a function)

Error: Failed installing dependency: https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master/lzmq-0.4.2-1.src.rock - Build error: Failed compiling object src/lzmq.o
Writing new paths to shell config

torch.max(dim) not working on arm

I have to debug further, but torch.max(tensor, dim) is returning a nil value as output on ARM. The same tensor passed to the function on x86 works.

Segfault on rand functions

This commit introduces a bug on OSX: any call to the rand functions (randn, rand, ...) segfaults. Am I the only one to have this problem?

Installation scatters header files around

I am aware (though not yet sure why) that brew doctor is expected to find

Warning: Unbrewed dylibs were found in /usr/local/lib.
If you didn't put them there on purpose they could cause problems when
building Homebrew formulae, and may need to be deleted.

Unexpected dylibs:
    /usr/local/lib/libluajit.dylib
    /usr/local/lib/libluaT.dylib
    /usr/local/lib/libqlua.dylib
    /usr/local/lib/libqtlua.dylib
    /usr/local/lib/libTH.dylib

But now we have a newcomer:

Warning: Unbrewed header files were found in /usr/local/include.
If you didn't put them there on purpose they could cause problems when
building Homebrew formulae, and may need to be deleted.

Unexpected header files:
    /usr/local/include/lauxlib.h
    /usr/local/include/lua.h
    /usr/local/include/luaconf.h
    /usr/local/include/luajit.h
    /usr/local/include/lualib.h
    /usr/local/include/luaT.h
    /usr/local/include/qtlua/qtluaconf.h
    /usr/local/include/qtlua/qtluaengine.h
    /usr/local/include/qtlua/qtluautils.h
    /usr/local/include/TH/generic/THBlas.h
    /usr/local/include/TH/generic/THLapack.h
    /usr/local/include/TH/generic/THStorage.h
    /usr/local/include/TH/generic/THStorageCopy.h
    /usr/local/include/TH/generic/THTensor.h
    /usr/local/include/TH/generic/THTensorConv.h
    /usr/local/include/TH/generic/THTensorCopy.h
    /usr/local/include/TH/generic/THTensorLapack.h
    /usr/local/include/TH/generic/THTensorMath.h
    /usr/local/include/TH/generic/THTensorRandom.h
    /usr/local/include/TH/TH.h
    /usr/local/include/TH/THAllocator.h
    /usr/local/include/TH/THBlas.h
    /usr/local/include/TH/THDiskFile.h
    /usr/local/include/TH/THFile.h
    /usr/local/include/TH/THFilePrivate.h
    /usr/local/include/TH/THGeneral.h
    /usr/local/include/TH/THGenerateAllTypes.h
    /usr/local/include/TH/THGenerateFloatTypes.h
    /usr/local/include/TH/THGenerateIntTypes.h
    /usr/local/include/TH/THLapack.h
    /usr/local/include/TH/THLogAdd.h
    /usr/local/include/TH/THMemoryFile.h
    /usr/local/include/TH/THRandom.h
    /usr/local/include/TH/THStorage.h
    /usr/local/include/TH/THTensor.h
    /usr/local/include/TH/THTensorApply.h
    /usr/local/include/TH/THTensorDimApply.h
    /usr/local/include/TH/THTensorMacros.h
    /usr/local/include/TH/THVector.h

What is happening?

Undefined variable in C makes the installation fail

Building with gcc-4.9 and g++-4.9 on MacOS 10.9.5

[ 44%] Building C object lib/luaT/CMakeFiles/luaT.dir/luaT.c.o                                                                          
/tmp/luarocks_torch-scm-1-5577/torch7/lib/luaT/luaT.c: In function 'luaT_lua_newmetatable':                                             
/tmp/luarocks_torch-scm-1-5577/torch7/lib/luaT/luaT.c:398:21: error: 'LUA_GLOBALSINDEX' undeclared (first use in this function)         
     lua_getfield(L, LUA_GLOBALSINDEX, module_name);                                                                                    
                     ^                                                                                                                  
/tmp/luarocks_torch-scm-1-5577/torch7/lib/luaT/luaT.c:398:21: note: each undeclared identifier is reported only once for each function i
t appears in                                                                                                                            
make[2]: *** [lib/luaT/CMakeFiles/luaT.dir/luaT.c.o] Error 1                                                                            
make[1]: *** [lib/luaT/CMakeFiles/luaT.dir/all] Error 2                                                                                 
make: *** [all] Error 2                                                                                                                 

Error: Build error: Failed building.                                                                                                    

Code for Multi-Scale Convolutional Net

Hello.
I am trying to reproduce what Clement Farabet and others did in this paper: http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf
(Learning Hierarchical Features for Scene Labeling).

What I want is to train my system (consisting of copies of a single convnet) on a multi-scale pyramid version of an image, upscale the individual outputs, combine them to form one large set of feature maps, and forward that into a 2-layer MLP.

Can anyone please point me towards the code that does this? A short tutorial would also be helpful.

torch memory pool

Hi,

I was wondering if we could build some kind of memory pool singleton which we could pass around torch and cutorch functions (or which they could access via a static variable). But this singleton approach would require that we make it thread-safe...

Another option would be to make a Storage-local memory pool (or Tensor-local?). We add an attribute to the Storage structure which C/Cuda functions can resize and use as an object-local memory pool. We could make its use optional by adding a global function torch.poolMemory().

This would allow us to make all the repeated cudaMalloc and malloc calls occur only once, thereby making the code faster (non-blocking in the case of cutorch), at the cost of a heavier memory footprint. The many THTensor_newContiguous() and THCudaTensor_computesz calls all require new memory allocations, which are slow and hidden. THTensor_newContiguous() is a particular nuisance as it is located in so many BLAS wrappers and nn.Modules.

I believe this latter option, the storage-local memory pool, would be quite thread-safe as accessing a single Storage from different threads isn't currently thread-safe anyway (AFAIK).

Local install failing

I am replicating the install-luajit+torch script for my distro (Gentoo), but it seems to want to install in root regardless of using --local.

I am not CMake-literate, so I am not sure whether these lines mean it's expected behaviour:

TorchConfig.cmake.in:

# This (ugly) setup assumes:
#  CMAKE_PREFIX_PATH = LUA_BINDIR
#  CMAKE_INSTALL_PREFIX = PREFIX

If not, sundown, cwrap and paths install fine, and here is my in/output:

eval $(luarocks path)
export CMAKE_LIBRARY_PATH=/usr/lib64/gcc/x86_64-pc-linux-gnu/4.7.3/

lua_modules=('sundown' 'cwrap' 'paths' 'torch')

for i in "${lua_modules[@]}"
do
   luarocks --local --server=https://raw.github.com/torch/rocks/master install $i || (echo "Error occurred"; break)
echo 
done
Install the project...
-- Install configuration: "Release"
-- Installing: /usr/share/cmake/torch/TorchExports.cmake
CMake Error at cmake_install.cmake:48 (FILE):
  file INSTALL cannot copy file
  "/tmp/luarocks_torch-scm-1-9869/torch7/build/CMakeFiles/Export/share/cmake/torch/TorchExports.cmake"
  to "/usr/share/cmake/torch/TorchExports.cmake".

Clarification please: what's up with torch.eig()'s return values?

Hi guys

I'm confused: why does torch.eig(A), where A is a d-by-d square matrix, return a d-by-2 tensor of eigenvalues instead of a vector? Is it for complex eigenvalues?
Example:

t7> = torch.eig(torch.eye(5), 'V')
 1  0
 1  0
 1  0
 1  0
 1  0
[torch.DoubleTensor of dimension 5x2]

 1  0  0  0  0
 0  1  0  0  0
 0  0  1  0  0
 0  0  0  1  0
 0  0  0  0  1
[torch.DoubleTensor of dimension 5x5]

t7>
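If I understand the underlying LAPACK geev convention correctly, the second column holds the imaginary parts of the (possibly complex) eigenvalues; extracting the real parts would then look like:

```lua
require 'torch'

local e, V = torch.eig(torch.eye(5), 'V')
local realParts = e:select(2, 1)  -- column 1: real parts
local imagParts = e:select(2, 2)  -- column 2: imaginary parts (zero here)
```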

module 'cwrap' not found

Even after the successful installation of cwrap** and cleaning, we cannot manage to compile the project:

$ make
[ 3%] Built target TH
[ 44%] Built target luaT
[ 48%] Generating random.c
lua: random.lua:1: module 'cwrap' not found:
Failed loading module cwrap in LuaRocks rock cwrap scm-1

Any advice?

** cwrap installation successful:
$ luarocks --local --server=https://raw.github.com/torch/rocks/master install cwrap
Installing https://raw.github.com/torch/rocks/master/cwrap-scm-1.rockspec...
Using https://raw.github.com/torch/rocks/master/cwrap-scm-1.rockspec... switching to 'build' mode
Cloning into 'cwrap'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 8 (delta 0), reused 5 (delta 0)
Receiving objects: 100% (8/8), 9.80 KiB | 0 bytes/s, done.
Checking connectivity... done.
Updating manifest for $/.luarocks/lib/luarocks/rocks

cwrap scm-1 is now built and installed in $/.luarocks (license: BSD)

Tests failing

There are a few tests failing in torch.test(). We should probably not merge a commit if some of the tests fail.

`help` command in iTorch does not understand symbols iTorch is processing

I am following the Torch tutorial here: http://code.madbits.com/wiki/doku.php?id=tutorial_basics

When I try to use the help function in iTorch, I cannot get help on certain functions. For example:

In [17]:
help(itorch.image)

Out[17]:
undocumented symbol 

In [18]:
help(res:view)
[string "help(res:view)..."]:1: function arguments expected near ')'

These functions themselves work correctly (after adding the appropriate preceding code from the tutorial). Am I calling the help command incorrectly or is there just missing documentation?

module 'cunn' not found

When I run the code require 'cunn', it fails with:
no field package.preload['cunn']
no file './cunn.lua'
no file '/usr/local/share/luajit-2.0.2/cunn.lua'
no file '/usr/local/share/lua/5.1/cunn.lua'
no file '/usr/local/share/lua/5.1/cunn/init.lua'
no file './cunn.lua'
no file './cunn/init.lua'
no file './lua/cunn.lua'
no file './lua/cunn/init.lua'
no file '/opt/zbstudio/lualibs/cunn/cunn.lua'
no file '/opt/zbstudio/lualibs/cunn.lua'
no file '/opt/zbstudio/bin/linux/x64/libcunn.so'
no file '/opt/zbstudio/bin/linux/x64/clibs/cunn.so'
no file './cunn.so'
no file '/usr/local/lib/lua/5.1/cunn.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/opt/zbstudio/bin/linux/x64/libcunn.so'
no file '/opt/zbstudio/bin/linux/x64/clibs/cunn.so'
stack traceback:
[C]: in function 'require'
I need your help, thanks!

isTensor is a nil value

require 'cutorch'
require 'torch'

a = torch.rand(3,20,20):type('torch.FloatTensor')

a = a:cuda()

=torch.isTensor(a)

Then I got

> = torch.isTensor(a)
[string " torch.isTensor(a)"]:1: attempt to call field 'isTensor' (a nil value)

Has isTensor become outdated?

torch.Tensor() bugged

th> a = torch.DoubleTensor()                                                                          
                                                                      [0.0003s]
th> b = torch.Tensor(a)                                                                               
                                                                      [0.0002s]
th> c = torch.FloatTensor()
                                                                      [0.0003s]
th> d = torch.Tensor(c)
bad argument #1 to '?' (expecting number or Tensor or Storage)
stack traceback:
        [C]: at 0x045cb7d0
        [C]: in function 'Tensor'
        [string "d = torch.Tensor(c)..."]:1: in main chunk
        [C]: in function 'xpcall'
        /usr/local/share/lua/5.1/trepl/init.lua:588: in function </usr/local/share/lua/5.1/trepl/init.
lua:489>
                                                                      [0.0006s]
th> 

nn.Reshape not reshaping leading dimensions of size 1

The docs for Reshape say:

module = Reshape(dimension1, dimension2, ..)

Reshapes an nxpxqx.. Tensor into a dimension1xdimension2x... Tensor, taking the elements column-wise.

However, it appears to leave leading dimensions of size 1 unchanged.

>>> nn.Reshape(6):forward(torch.Tensor(1, 2, 3)):size()
1
6
[torch.LongStorage of size 2]

Is this a bug or is this intended behavior?
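A possible workaround while this is clarified: reshape the underlying tensor directly, which does not special-case leading singleton dimensions (a sketch):

```lua
require 'torch'

local t = torch.Tensor(1, 2, 3)
local flat = t:view(6)  -- plain tensor reshape: size is 6, not 1x6
```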

helptool issue

Perhaps this is a minor issue, but whenever I attempt to get help for the specific function
?torch.Tensor.data
the REPL dies and dumps core with no stack trace.
Same behavior with th and qlua -lqttorch -lenv.

I tried reinstalling torch to make sure I didn't have a corrupt install, but get the same behavior.

The helptool works for everything else I have tried so far.

getRNGState/setRNGState does not serialise Gaussian state

getRNGState / setRNGState in THRandom does not correctly serialise the state used to generate normally distributed numbers.

See timharley:getRNG_gaussian_bug for a patch to the tests which uncovers the bug.

I am working on a fix.

memory leak in torch.writeObject()

Below is a short program that seems to demonstrate a memory leak in torch.writeObject(). On my system (Mac OSX 10.6.8, 8 GB memory) torch halts after 4096 iterations with the message "luajit: not enough memory". Note that the code explicitly calls the garbage collector after every call to writeObject(), leading to the hypothesis of a memory leak.

 -- bug-writeobject-memoryleak.lua
 -- demonstrate memory leak in torch.writeObject

 require 'torch'

 local function append(outputFile, count)
    local tensor = torch.rand(100000)
    print('appending', count)
    outputFile:writeObject(tensor)
    collectgarbage()
 end

 local outputFile = torch.DiskFile('/tmp/bug-writeoject-memoryleak.serialized', 'w')
 assert(outputFile)

 for i = 1, 1e6 do
    append(outputFile, i)
 end

 print('done')

torch.copy should allow copying from a Lua table.

Currently, we can initialize a tensor using a Lua table. However, when one wishes to copy from a Lua table into a tensor A, one either has to initialize another tensor B from the table and then copy B into A, or manually copy each element of the table into A using Lua code. What I would like to do is extend torch.copy to take Lua tables as input. I could use the code at https://github.com/torch/torch7/blob/master/generic/Tensor.c#L80 to modify the code at https://github.com/torch/torch7/blob/master/lib/TH/generic/THTensorCopy.c .

If this sounds good to you, I will proceed to implement.
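The current workaround spelled out (sizes assumed to match):

```lua
require 'torch'

local A = torch.Tensor(3)
local tbl = {1, 2, 3}

-- Today: go through an intermediate tensor B built from the table...
local B = torch.Tensor(tbl)
A:copy(B)

-- ...whereas the proposal would allow directly:
-- A:copy(tbl)
```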

Install: could NOT find Qt4 (found suitable version "4.8.5", minimum required is "4.3.0")

When I run the install scripts, I get the following error. How to fix it?

curl -sk https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
curl -sk https://raw.githubusercontent.com/torch/ezinstall/master/install-luajit+torch | bash

(...)

[100%] Built target cunn
Install the project...
-- Install configuration: "Release"
-- Installing: /usr/local/lib/luarocks/rocks/cunn/scm-1/lib/libcunn.so
-- Set runtime path of "/usr/local/lib/luarocks/rocks/cunn/scm-1/lib/libcunn.so" to "$ORIGIN/../lib:/opt/OpenBLAS/lib:/usr/local/lib:/usr/local/cuda/lib64"
-- Installing: /usr/local/lib/luarocks/rocks/cunn/scm-1/lua/cunn/init.lua
-- Installing: /usr/local/lib/luarocks/rocks/cunn/scm-1/lua/cunn/test.lua
Warning: Directory 'doc' not found
Updating manifest for /usr/local/lib/luarocks/rocks

cunn scm-1 is now built and installed in /usr/local/ (license: BSD)
Installing https://raw.githubusercontent.com/torch/rocks/master/qtlua-scm-1.rockspec...
Using https://raw.githubusercontent.com/torch/rocks/master/qtlua-scm-1.rockspec... switching to 'build' mode
Cloning into 'qtlua'...
remote: Counting objects: 171, done.
remote: Compressing objects: 100% (161/161), done.
remote: Total 171 (delta 9), reused 132 (delta 1)
Receiving objects: 100% (171/171), 357.84 KiB | 170.00 KiB/s, done.
Resolving deltas: 100% (9/9), done.
Checking connectivity... done.
cmake -E make_directory build && cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DLUA=/usr/local/bin/luajit -DLUA_BINDIR="/usr/local/bin" -DLUA_INCDIR="/usr/local/include" -DLUA_LIBDIR="/usr/local/lib" -DLUADIR="/usr/local/lib/luarocks/rocks/qtlua/scm-1/lua" -DLIBDIR="/usr/local/lib/luarocks/rocks/qtlua/scm-1/lib" -DCONFDIR="/usr/local/lib/luarocks/rocks/qtlua/scm-1/conf" && make

-- The C compiler identification is GNU 4.8.2
-- The CXX compiler identification is GNU 4.8.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Looking for Q_WS_X11
-- Looking for Q_WS_X11 - found
-- Looking for Q_WS_WIN
-- Looking for Q_WS_WIN - not found
-- Looking for Q_WS_QWS
-- Looking for Q_WS_QWS - not found
-- Looking for Q_WS_MAC
-- Looking for Q_WS_MAC - not found
CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
Could NOT find Qt4 (missing: QT_MOC_EXECUTABLE QT_RCC_EXECUTABLE
QT_UIC_EXECUTABLE) (found suitable version "4.8.5", minimum required is
"4.3.0")
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-2.8/Modules/FindQt4.cmake:1393 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
CMakeLists.txt:27 (FIND_PACKAGE)

-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_qtlua-scm-1-2097/qtlua/build/CMakeFiles/CMakeOutput.log".
See also "/tmp/luarocks_qtlua-scm-1-2097/qtlua/build/CMakeFiles/CMakeError.log".

Error: Build error: Failed building.

CmdLine.log cannot create a directory

The example in the documentation doesn't work because io.open won't create a folder. One would either need to call os.execute('mkdir -p ' .. params.rundir) before cmd:log (which is quite dangerous), or CmdLine needs luafilesystem to create the folder when necessary.
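A sketch of a workaround using the paths package (already a Torch dependency) instead of shelling out; params.rundir is the field from the doc example:

```lua
require 'paths'

-- Ensure the run directory exists before cmd:log tries to io.open inside it.
paths.mkdir(params.rundir)
cmd:log(params.rundir .. '/log', params)
```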

blas fails for gemm with zero strides

th> torch.randn(1,2) * torch.randn(2,1):expand(2,2)
 ** On entry to DGEMM  parameter number  8 had an illegal value
0 0
[torch.DoubleTensor of dimension 1x2]

thanks to tudor bosman for reporting this

serialization fails for double values when format = 'ascii'

When serializing an object using format == 'ascii', the values of number objects are truncated to 6 significant digits. Thus the loaded value can differ from the saved value.

Example program to illustrate this:

require 'torch'

local x = 913062344
local file = 'test_torch_save_load.data'
local format = 'ascii'

torch.save(file, x, format)
local y = torch.load(file, format)

print(x, y)
assert(x == y)
print('ok')
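Until the ASCII writer emits full precision, the binary format round-trips doubles exactly; the same example with the format swapped:

```lua
require 'torch'

local x = 913062344
local file = 'test_torch_save_load.data'

torch.save(file, x, 'binary')  -- the default format; preserves all bits
local y = torch.load(file, 'binary')
assert(x == y)
```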

torch.repeatTensor() doesn't work with non-contiguous Tensor

Let's say I have a column from an image which I'd like to replicate horizontally. Only the first two rows are computed correctly.

> = l
(1,.,.) = 
  0.6902
  0.6863
  0.6863
  0.6510
  0.6392

(2,.,.) = 
  0.6627
  0.6510
  0.6314
  0.5922
  0.5882

(3,.,.) = 
  0.6157
  0.5922
  0.5882
  0.5765
  0.5725
[torch.DoubleTensor of dimension 3x5x1]

> = l:repeatTensor(1,1,2)
(1,.,.) = 
  0.6902  0.6902
  0.6863  0.6863
  0.6627  0.6627
  0.6588  0.6588
  0.6471  0.6471

(2,.,.) = 
  0.6314  0.6314
  0.6157  0.6157
  0.6000  0.6000
  0.5765  0.5765
  0.5686  0.5686

(3,.,.) = 
  0.5569  0.5569
  0.5451  0.5451
  0.5176  0.5176
  0.5176  0.5176
  0.4980  0.4980
[torch.DoubleTensor of dimension 3x5x2]

CmdLine parsing bug for options starting with 'l'

I have a problem with parsing command-line parameters starting with the letter l:

Here's a small script to demonstrate the problem:

 print '==> processing options'
 cmd = torch.CmdLine()
 cmd:option('-learningrate', 0.05, 'lr')
 cmd:text()
 opts = cmd:parse(arg or {})

print(opts)

I saved the snippet as cmdtest.lua and ran:

   > th cmdtest.lua -learningrate 5
could not load earningrate, skipping
==> processing options  

invalid argument: 5 

Usage: [options] 
  -learningrate lr [0.05]

Apparently the learningrate parameter is passed in as earningrate? Changing the parameter name from learningrate to eta solves the problem:

th cmdtest.lua -eta 5
==> processing options
{
eta : 5
}

I'm using OSX mavericks with a danish keyboard layout

help function doesn't work

I installed torch7 and other modules on a Mac by following the Cheatsheet. Somehow, the help function doesn't work (help is nil).
Do you have any solution for this issue?

Invalid instructions for CmdLine doc

To be short, lua myscript.lua won't work as the doc shows.

  1. Torch is based on LuaJIT and LuaRocks, which means that plain lua is not essential to this framework.
  2. CmdLine is a Torch utility, as in the Torch tool collection; require 'torch' is needed at the top of the code.
  3. th myscript.lua gives what you want.

It's a pity. Well, this might be helpful to someone who has tried to read every Torch doc.

expand():isContiguous() is false

Is it normal that the result of an expand() isn't contiguous (i.e. tensors having a stride of zero in one dim)?

Specifically, couldn't cublas deal with these directly without requiring a contiguous copy? Same goes for the cutorch copy kernels.

Because this is forcing me to use repeatTensor instead of expand.
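A middle ground until the kernels accept zero strides: keep the cheap expanded view and materialize a copy only where an op demands contiguity (a sketch):

```lua
require 'torch'

local v = torch.randn(1, 5)
local e = v:expand(4, 5)   -- zero-stride view, no extra memory
-- Force a real copy only where a contiguous layout is required:
local c = e:contiguous()   -- materializes; comparable to repeatTensor here
```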

Subtraction does not work

Subtraction as described here: https://github.com/torch/torch7/blob/09eaf1ef629062ca69e4763d0fde26736340aa24/doc/maths.md#addition-and-substraction

(btw, it is spelled subtraction, not substraction)

Does not work as expected:

> x = torch.Tensor(2,2):fill(2)
                                                                      [0.0001s]
> = 3-x
[string "return  3-x..."]:1: internal error in __sub: no metatable
stack traceback:
    [C]: in function '__sub'
    [string "return  3-x..."]:1: in main chunk
    [C]: in function 'xpcall'
    /usr/local/share/lua/5.1/trepl/init.lua:519: in function </usr/local/share/lua/5.1/trepl/init.lua:426>
                                                                      [0.0002s]

Installation problems

I tried to install with the curl method, which results in the following. I used the installation instructions from http://torch.ch/

which: no nvcc in
....
sh: line 1: 41164 Trace/BPT trap: 5 'wget' --no-check-certificate --no-cache --user-agent='LuaRocks/2.2.0beta1 macosx-x86_64 via wget' --quiet --timeout=30 --tries=1 --timestamping 'https://raw.githubusercontent.com/torch/rocks/master/manifest' > /dev/null 2> /dev/null
Warning: Failed searching manifest: Failed fetching manifest for https://raw.githubusercontent.com/torch/rocks/master - Failed downloading https://raw.githubusercontent.com/torch/rocks/master/manifest

....
sh: line 1: 41185 Trace/BPT trap: 5 'wget' --no-check-certificate --no-cache --user-agent='LuaRocks/2.2.0beta1 macosx-x86_64 via wget' --quiet --timeout=30 --tries=1 --timestamping 'https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master/manifest' > /dev/null 2> /dev/null
Warning: Failed searching manifest: Failed fetching manifest for https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master - Failed downloading https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master/manifest

Error: No results matching query were found.
Password:

....
sh: line 1: 41657 Trace/BPT trap: 5 'wget' --no-check-certificate --no-cache --user-agent='LuaRocks/2.2.0beta1 macosx-x86_64 via wget' --quiet --timeout=30 --tries=1 --timestamping 'https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master/manifest' > /dev/null 2> /dev/null
Warning: Failed searching manifest: Failed fetching manifest for https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master - Failed downloading https://raw.githubusercontent.com/rocks-moonscript-org/moonrocks-mirror/master/manifest

Error: No results matching query were found.
Error. Exiting.
