juanjosegarciaripoll / tensor
C++ library for numerical arrays and tensor objects and operations with them, designed to allow Matlab-style programming.
License: Other
A use-case that I just came across:
I have a 3D tensor. Now I want to take a slice of this tensor, but, depending on some variable, either along the first or the second dimension. Right now, I have to treat both cases explicitly, but it would be nice if I had the freedom to say "give me a slice along the i-th dimension".
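The wish above could be sketched roughly as follows. This is not the library's API; it is a hypothetical helper (`slice_along`) written on a flat `std::vector`, assuming column-major storage as in Matlab:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: extract the slice at position `index` along dimension
// `axis` of a column-major array stored in a flat vector. The real library
// would return a tensor view; here we simply copy the selected elements.
std::vector<double> slice_along(const std::vector<double>& data,
                                const std::vector<std::size_t>& dims,
                                std::size_t axis, std::size_t index) {
    assert(axis < dims.size() && index < dims[axis]);
    // column-major strides: stride[d] = product of dims[0..d-1]
    std::vector<std::size_t> stride(dims.size(), 1);
    for (std::size_t d = 1; d < dims.size(); ++d)
        stride[d] = stride[d - 1] * dims[d - 1];
    std::vector<double> out;
    for (std::size_t i = 0; i < data.size(); ++i)
        if ((i / stride[axis]) % dims[axis] == index)
            out.push_back(data[i]);
    return out;
}
```

With such a helper, the two explicit cases collapse into one call parameterized by `axis`.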
The following code should produce no warning or failure:
CTensor x(1, 4);
CTensor y = reshape(x, 3, 1);
except that y still has size 4 and the old shape. This was an ugly surprise just a few minutes ago. Though this is obviously a coding bug on my side, it should be caught.
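The check that reshape() could perform is cheap; a minimal sketch (the function name is hypothetical, not from the library): the product of the requested dimensions must equal the current element count, otherwise the call should assert or throw instead of silently keeping the old size.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// Sketch of the validity check: reshape(x, 3, 1) on a 4-element tensor
// would fail this test and could then abort with a clear message.
bool reshape_is_valid(std::size_t current_size,
                      const std::vector<std::size_t>& new_dims) {
    std::size_t n = std::accumulate(new_dims.begin(), new_dims.end(),
                                    std::size_t{1},
                                    std::multiplies<std::size_t>{});
    return n == current_size;
}
```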
I have installed packages "openblas" and "lapack" through vcpkg, then tried to build the tensor library on top of it, using the vcpkg install dir as CMAKE_PREFIX_PATH.
This fails to find LAPACK. Upon deeper investigation, it turns out that LAPACK is actually found, but its config file introduces a new CMake target "lapack", while the tensor library expects a target called LAPACK::LAPACK (CMakeLists.txt, line 116). This fails the check for the imported target's existence (cmake/TensorDependencies.cmake, line 54), and the CMake step aborts.
I guess there is a reason why the imported target is named this way, and namespacing targets is generally a good idea, but it does not match the LAPACK libraries that I have here.
I can of course work around by explicitly specifying TENSOR_lapack_CXXFLAGS and TENSOR_lapack_LDFLAGS, but this is somewhat inconvenient.
As the title suggests, it is not possible to directly reshape a tensor slice / view, that is,
CTensor x = CTensor::random(2, 3, 4);
CTensor y = reshape( x(_, 3, 4), 12);
fails to compile.
The workaround is to assign the slice to a new tensor first and then reshape that tensor. This workaround should not even carry a significant performance overhead.
Just a comment so it is not forgotten: there was recently a bug in trace() with an incorrect index calculation. It was not caught by the unit tests, because it only occurred for tensors with unequal dimension sizes, while the tests use the same size for every dimension.
So, a suggestion: when iterating over tensors of different ranks and sizes in the tests, use unequal (possibly random) dimension sizes.
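A test helper along these lines could look as follows; this is a hypothetical sketch (`random_shape` is not a library function), which rejects fully symmetric shapes so that stride-bookkeeping bugs cannot hide behind equal dimensions:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical test helper: generate a random shape of the given rank with
// not-all-equal dimension sizes, so bugs like the one in trace() surface.
std::vector<std::size_t> random_shape(std::size_t rank, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> dist(1, 6);
    std::vector<std::size_t> dims(rank);
    bool all_equal;
    do {
        for (auto& d : dims) d = dist(rng);
        all_equal = true;
        for (auto d : dims)
            if (d != dims[0]) all_equal = false;
    } while (rank > 1 && all_equal);  // reject fully symmetric shapes
    return dims;
}
```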
While translating Wavepacket to the cmake branch of the tensor library, I noticed a behavior change.
Previously, tensor::scprod(a, b) would calculate b^* \cdot a (the "mathematician's" scalar-product convention). Now it is the other way around (the "physicist's" convention), where the complex conjugate is taken of a.
As a physicist with backwards-compatibility concerns, I have no strong opinion either way, but the change should either be reverted or marked as a breaking change.
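For clarity, the two conventions side by side, as a free-standing sketch on `std::vector` (these are not the library's functions): the mathematician's convention conjugates the second argument, the physicist's conjugates the first, and the two results are complex conjugates of each other.

```cpp
#include <cassert>
#include <complex>
#include <cstddef>
#include <vector>

using cdouble = std::complex<double>;

// "Mathematician's" convention: sum_i a_i * conj(b_i)  (old scprod behavior)
cdouble scprod_math(const std::vector<cdouble>& a,
                    const std::vector<cdouble>& b) {
    cdouble s = 0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * std::conj(b[i]);
    return s;
}

// "Physicist's" convention: sum_i conj(a_i) * b_i  (new scprod behavior)
cdouble scprod_phys(const std::vector<cdouble>& a,
                    const std::vector<cdouble>& b) {
    cdouble s = 0;
    for (std::size_t i = 0; i < a.size(); ++i) s += std::conj(a[i]) * b[i];
    return s;
}
```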
Just tried something along these lines:
RTensor matrix = ...
RTensor couplings = take_diag(matrix, 1);
RTensor frequencies = take_diag(matrix, 0, 1); // skip the first frequency
As soon as I add the last line, I get memory corruption when leaving the scope (i.e., when the tensors are destroyed). This suggests some issue in take_diag() when the start/end points are supplied.
Just tried to compile tensor on a fresh computer and came across a few problems that I had not remembered or had worked around on my usual machine.
Basically, I cannot build the tests out of the box. The issue is that in test/Makefile.am, @GTEST_DIR@ is replaced by the googletest directory as seen from the top build directory, not from the test directory. So essentially it needs ${top_builddir} prepended.
If you write something like the following:
RTensor x = RTensor::random(5);
RTensor y = 2*x;
you get a compile error. The underlying reason is that the multiplication operator is only defined for floating-point scalars.
This is not a major issue, since you can write "2.0 * x" to circumvent it, but it is surprising and, thanks to the indecipherable nature of C++ error messages, moderately time-consuming to figure out.
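A possible fix, sketched on a plain `std::vector<double>` stand-in for RTensor (the template is hypothetical, not the library's code): accept any arithmetic scalar and promote it to double, so that `2 * x` compiles as well as `2.0 * x`.

```cpp
#include <cassert>
#include <cstddef>
#include <type_traits>
#include <vector>

// Hypothetical overload: any arithmetic scalar (int, long, double, ...)
// is accepted and promoted to double before the elementwise product.
template <typename Scalar,
          typename = std::enable_if_t<std::is_arithmetic<Scalar>::value>>
std::vector<double> operator*(Scalar s, const std::vector<double>& x) {
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = static_cast<double>(s) * x[i];
    return y;
}
```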
Since a typical FFT spends around a third of its time setting up and tearing down the plan, it might be useful to expose FFTW's planning routines so that plans can be recycled.
This also requires studying how transferable plans are, and thinking about possible catches (e.g., plans being invalidated).
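The recycling idea itself is simple; a minimal sketch with a dummy `Plan` struct standing in for an `fftw_plan` (the real code would create plans with FFTW's planner, e.g. fftw_plan_dft_1d, and destroy them with fftw_destroy_plan): plans are cached per transform size, so repeated FFTs of the same length pay the planning cost only once.

```cpp
#include <cassert>
#include <cstddef>
#include <map>

struct Plan {
    std::size_t size;  // stand-in for an fftw_plan of this transform length
};

class PlanCache {
    std::map<std::size_t, Plan> cache_;
    int plans_created_ = 0;
  public:
    const Plan& get(std::size_t n) {
        auto it = cache_.find(n);
        if (it == cache_.end()) {
            // real code would call the FFTW planner here
            it = cache_.emplace(n, Plan{n}).first;
            ++plans_created_;
        }
        return it->second;
    }
    int plans_created() const { return plans_created_; }
};
```

The open questions from the text remain: whether a plan may be reused across different input/output buffers, and when cached plans must be invalidated.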
In src/tensor/tensor_to_complex.cpp, line 40, the last iterator is not incremented properly ("+ii" instead of "++ii"), which causes the reconstructed complex tensor to be wrong.
Effectively, if you supply the two RTensors, you get a complex tensor whose real values are correct, but whose imaginary values are all equal to the first entry of the second tensor.
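A minimal version of the intended loop, on plain vectors rather than the library's types: both iterators must advance in lockstep ("+ii" merely evaluates the iterator and discards the result, leaving it stuck at the first element).

```cpp
#include <cassert>
#include <complex>
#include <vector>

// Combine a real and an imaginary part into a complex sequence.
std::vector<std::complex<double>> to_complex(const std::vector<double>& re,
                                             const std::vector<double>& im) {
    std::vector<std::complex<double>> out;
    auto r = re.begin();
    auto i = im.begin();
    for (; r != re.end() && i != im.end(); ++r, ++i)  // both advance
        out.emplace_back(*r, *i);
    return out;
}
```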
There is nothing you can do with the trace of a 1D tensor, so this is again one of those coding bugs on my side.
However, tensor::trace makes this mistake very easy by not complaining in such a case, but instead returning utter garbage (approx. 1e200). That could be fixed.
This is the sequence I used to install your lib:
git clone https://github.com/juanjosegarciaripoll/tensor.git
cd tensor/
./autogen.sh
./configure --prefix=/home/me/Documents/dir1/dir2/lib/tensor/ LIBS="-llapack -lcblas -latlas" CXXFLAGS="" (this is the folder where I git-cloned it to; this step needs much more explanation)
make
make check
make doxygen-doc
make install
The last step fails with the following output:
Making install in src
make[1]: Entering directory '/home/me/Documents/dir1/dir2/lib/tensor/src'
make[2]: Entering directory '/home/me/Documents/dir1/dir2/lib/tensor/src'
/bin/mkdir -p '/home/me/Documents/dir1/dir2/lib/tensor/lib'
/bin/bash ../libtool --mode=install /usr/bin/install -c libtensor.la '/home/me/Documents/dir1/dir2/lib/tensor/lib'
libtool: install: /usr/bin/install -c .libs/libtensor.so.0.0.0 /home/me/Documents/dir1/dir2/lib/tensor/lib/libtensor.so.0.0.0
libtool: install: (cd /home/me/Documents/dir1/dir2/lib/tensor/lib && { ln -s -f libtensor.so.0.0.0 libtensor.so.0 || { rm -f libtensor.so.0 && ln -s libtensor.so.0.0.0 libtensor.so.0; }; })
libtool: install: (cd /home/me/Documents/dir1/dir2/lib/tensor/lib && { ln -s -f libtensor.so.0.0.0 libtensor.so || { rm -f libtensor.so && ln -s libtensor.so.0.0.0 libtensor.so; }; })
libtool: install: /usr/bin/install -c .libs/libtensor.lai /home/me/Documents/dir1/dir2/lib/tensor/lib/libtensor.la
libtool: install: /usr/bin/install -c .libs/libtensor.a /home/me/Documents/dir1/dir2/lib/tensor/lib/libtensor.a
libtool: install: chmod 644 /home/me/Documents/dir1/dir2/lib/tensor/lib/libtensor.a
libtool: install: ranlib /home/me/Documents/dir1/dir2/lib/tensor/lib/libtensor.a
libtool: finish: PATH="/home/me/anaconda3/condabin:/home/me/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.3.0/environment-modules-3.2.10-2hbrcbm6rhirdz32t3o7b273v2ddpfo7/Modules/bin:/home/me/spack/bin:/home/me/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/sbin" ldconfig -n /home/me/Documents/dir1/dir2/lib/tensor/lib
----------------------------------------------------------------------
Libraries have been installed in:
/home/me/Documents/dir1/dir2/lib/tensor/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the '-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the 'LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the 'LD_RUN_PATH' environment variable
during linking
- use the '-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to '/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
make[2]: Nothing to be done for 'install-data-am'.
make[2]: Leaving directory '/home/me/Documents/dir1/dir2/lib/tensor/src'
make[1]: Leaving directory '/home/me/Documents/dir1/dir2/lib/tensor/src'
Making install in include
make[1]: Entering directory '/home/me/Documents/dir1/dir2/lib/tensor/include'
make[2]: Entering directory '/home/me/Documents/dir1/dir2/lib/tensor/include'
make[2]: Nothing to be done for 'install-exec-am'.
/bin/mkdir -p '/home/me/Documents/dir1/dir2/lib/tensor/include'
/bin/mkdir -p '/home/me/Documents/dir1/dir2/lib/tensor/include/../include/tensor'
/usr/bin/install -c -m 644 ../include/tensor/config.h '/home/me/Documents/dir1/dir2/lib/tensor/include/../include/tensor'
/usr/bin/install: '../include/tensor/config.h' and '/home/me/Documents/dir1/dir2/lib/tensor/include/../include/tensor/config.h' are the same file
Makefile:402: recipe for target 'install-nobase_includeHEADERS' failed
make[2]: *** [install-nobase_includeHEADERS] Error 1
make[2]: Leaving directory '/home/me/Documents/dir1/dir2/lib/tensor/include'
Makefile:521: recipe for target 'install-am' failed
make[1]: *** [install-am] Error 2
make[1]: Leaving directory '/home/me/Documents/dir1/dir2/lib/tensor/include'
Makefile:506: recipe for target 'install-recursive' failed
make: *** [install-recursive] Error 1
What am I doing wrong?
I just noted a smaller issue in the CMake branch of the tensor library.
The function reshape(Tensor<>, Dimension) is defined inside the Tensor class with an inline friend definition. According to the C++ specification, such a construct creates a free function that is a friend of the Tensor class; it belongs to the enclosing namespace, but it is only visible to argument-dependent lookup, not to qualified lookup. At least g++ behaves this way; I have not checked the small print of whether this is standard behavior or a defect, but it is definitely an issue.
In principle, this should rarely be a problem: most people will be "using namespace tensor" and call reshape() unqualified, where argument-dependent lookup finds it. However, it is surprising that tensor::reshape() no longer resolves to an existing function.
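A minimal reproduction of the lookup behavior, with demo names rather than the library's (the so-called "hidden friend" idiom): the friend is callable through its argument's namespace, but a qualified call does not compile.

```cpp
#include <cassert>

namespace demo {
struct Box {
    int v;
    // "hidden friend": a free function in namespace demo, but only
    // reachable via argument-dependent lookup on Box arguments.
    friend int twice(const Box& b) { return 2 * b.v; }
};
}  // namespace demo

// twice(demo::Box{3}) compiles, because ADL searches namespace demo;
// demo::twice(demo::Box{3}) would fail to compile, since the friend is
// never declared at namespace scope.
```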
I have a problem that is simple in principle.
A program reads a tensor from an sdf file (tensor/sdf.h) and needs a certain element from this tensor, the indices of which are specified on the command line. However, the rank of the tensor may vary. In principle, the generic way to solve this would be to set up a tensor::Indices object, fill it with the indices, and call, e.g., RTensor::operator()(const Indices&).
The problem is that this function does not exist. To query a tensor element, I therefore have to pass the index numbers one by one to the function matching the rank of the tensor. This is an annoying solution and at odds with all other uses of tensor::Indices (e.g., all the tensor creation functions accept tensor::Indices for generic setup), so maybe this operator() should be added properly.
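What the missing operator would have to compute is just the flat offset for an arbitrary-rank index vector; a sketch on plain vectors, assuming column-major storage as in Matlab (the function name is hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Map a full index vector to the flat column-major offset, for any rank.
std::size_t flat_index(const std::vector<std::size_t>& dims,
                       const std::vector<std::size_t>& idx) {
    assert(dims.size() == idx.size());
    std::size_t offset = 0, stride = 1;
    for (std::size_t d = 0; d < dims.size(); ++d) {
        assert(idx[d] < dims[d]);
        offset += idx[d] * stride;
        stride *= dims[d];
    }
    return offset;
}
```

An operator()(const Indices&) would wrap exactly this computation, making element access independent of the tensor's rank.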
I get:
[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from RSparseIndexTest
[ RUN ] RSparseIndexTest.RSparseEmptyConstructor
lt-test_sparse_indices: ../include/tensor/detail/common.h:29: tensor::index tensor::normalize_index(tensor::index, tensor::index): Assertion `(i < dimension) && (i >= 0)' failed.
;;; ECL C Backtrace
;;; /home/ulf/src/tensor/src/.libs/libtensor.so.0(+0x315c3) [0xb76f55c3]
;;; /home/ulf/src/tensor/src/.libs/libtensor.so.0(+0x3169d) [0xb76f569d]
;;; [0xb77a7400]
;;; [0xb77a7424]
;;; /lib/i386-linux-gnu/i686/cmov/libc.so.6(gsignal+0x51) [0xb67d6941]
;;; /lib/i386-linux-gnu/i686/cmov/libc.so.6(abort+0x182) [0xb67d9d72]
;;; /lib/i386-linux-gnu/i686/cmov/libc.so.6(__assert_fail+0xf8) [0xb67cfb58]
[...]
I'm facing trouble with the 2nd step of the Build and Install part. I gave the command ./configure --prefix=$HOME LIBS="-llapack -lcblas -latlas" CXXFLAGS="src/Makefile.inc"
and got the following output
checking for gcc... gcc
checking whether the C compiler works... no
configure: error: in `/home/rmulay17/Downloads/tensor-master':
configure: error: C compiler cannot create executables
See `config.log' for more details
I don't know whether this command is correct, because I don't know what to put in CXXFLAGS="...", so could someone please guide me on how to use ./configure --prefix=$HOME LIBS="..." CXXFLAGS="..."?
I have already installed BLAS and LAPACK. The GCC compiler is also installed, and I also tried again after installing libc6-dev.
The problem is not so much that tensor is no longer installable out of the box on my work computer using MKL (that too), but that the output messages are pretty much useless. Without digging into the m4 code, it is impossible to make sense of why all the MKL files are found, but not the library.
So, at the very least, a few more error messages would be useful.
The following does not work right now:
CTensor psi = CTensor::random(5, 4, 3);
CTensor slice = psi(0, range(), 2);
because this calls the overload that takes only ranges, and mixing ranges and integer indices is not permitted. Instead, you have to write
CTensor slice = psi(range(0,0), range(), range(2,2));
which is ugly as hell.
The solution is rather trivial: The ranges need an implicit constructor that takes a single integer.
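The proposed fix can be sketched with a toy Range type (names hypothetical, not the library's): an intentionally implicit single-integer constructor lets a bare index appear wherever a range is expected, turning it into a one-element range.

```cpp
#include <cassert>

struct Range {
    long first, last;            // inclusive bounds; last < 0 means "everything"
    Range() : first(0), last(-1) {}
    Range(long i) : first(i), last(i) {}  // intentionally implicit
};

// Number of selected elements along a dimension of size `dim`.
long span(const Range& r, long dim) {
    return r.last < 0 ? dim : r.last - r.first + 1;
}
```

With such a constructor, a call like psi(0, range(), 2) could resolve to the ranges-only overload, with the integers 0 and 2 converted implicitly.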
Just looking at the serialization code in sdf.h, I spot a few problems:
Again a rather small issue with two possible fixes. I happened to run into it while trying to compile the tensor library on an older Debian with exactly CMake 3.18.x.
In TensorOptions.cmake, there is a generator expression like $<CONFIG:Release,RelWithDebInfo>.
According to the CMake documentation, listing several configurations in this expression is only allowed from CMake 3.19 onwards; CMake 3.18 only allows a single configuration to be specified.
There are principally two solutions. Neither is perfect, IMO.
While it is not critical (the code is easy to work around), some function "pow" for elementwise exponentiation would occasionally be pretty useful. That is, in Matlab style:
pow(mytensor, power) = mytensor .^ power
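A free-standing sketch of the requested function, on a `std::vector` stand-in for the tensor types (the name `pow_elementwise` is hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Elementwise exponentiation, i.e. Matlab's `mytensor .^ power`.
std::vector<double> pow_elementwise(std::vector<double> t, double p) {
    for (auto& v : t) v = std::pow(v, p);
    return t;
}
```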
My specific problem: I have a two-dimensional wave function / tensor psi(x,y), and I want to know the weight of certain harmonic oscillator states phi_n(x) and phi_m(y). Doing the math, this involves partial summation over, e.g., only the x-degree of freedom.
Hence, a function that sums a tensor over only some indices would be very handy. Right now, I work around this by using the corresponding mean function, but it is ugly.
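For the 2D case described above, the partial sum is easy to sketch on plain vectors with column-major storage (a stand-in, not the library's code); a general version would take the dimension to sum over as a parameter.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum psi(x, y) over the x index, leaving a vector indexed by y.
std::vector<double> sum_over_x(const std::vector<double>& psi,
                               std::size_t nx, std::size_t ny) {
    std::vector<double> out(ny, 0.0);
    for (std::size_t y = 0; y < ny; ++y)
        for (std::size_t x = 0; x < nx; ++x)
            out[y] += psi[x + y * nx];  // column-major offset
    return out;
}
```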
At least two people (me included) have problems running ./configure out of the box. It does not find the existing Fortran compiler (gfortran), looking for g77 instead.
My basic issue that I just come across: I want to check if a tensor t has a zero dimension. This is possible like this:
Boolean zeroDims = (t.dimensions() == 0);
std::count(zeroDims.begin(), zeroDims.end(), true);
However, this is a bit cumbersome. What I would like to have is a shortcut function similar to Matlab's any() (and, for consistency, also all()), so that I can write
any(t.dimensions() == 0);
Update: one could add a function any_of(Boolean) that returns true if any entry is true, and possibly further operations for completeness (all_of, none_of, ...). This would then allow removing the all_equal functions for tensors and possibly indices, because we could just type all_of(tensorA == tensorB).
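The proposed shortcuts are thin wrappers over the STL algorithms; a sketch on `std::vector<bool>` standing in for the library's boolean-tensor type:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// True if any entry is true.
bool any_of(const std::vector<bool>& b) {
    return std::any_of(b.begin(), b.end(), [](bool x) { return x; });
}

// True if every entry is true.
bool all_of(const std::vector<bool>& b) {
    return std::all_of(b.begin(), b.end(), [](bool x) { return x; });
}
```

With these, all_equal(a, b) is indeed just all_of(a == b), given an elementwise comparison operator.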
If a dependency cannot be found for whatever reason, the current CMake code allows the explicit definition of the compiler / linker flags for a dependency XYZ through the variables TENSOR_XYZ_CXXFLAGS and TENSOR_XYZ_LDFLAGS. Unfortunately, this scheme does not work for two different reasons:
I would have created a pull request, but I am not entirely sure how to solve this issue. One could create an IMPORTED target, which, however, needs an import location (static or shared library), which makes the definition of these custom dependencies more complex. Alternatively, one could just append the flags to the tensor_options target. This has the drawback that it spreads knowledge of this special target throughout the code base, and the code must be able to work with a non-existing dependency target.