mind / wheels
Performance-optimized wheels for TensorFlow (SSE, AVX, FMA, XLA, MPI)
Is it possible to have a version of TF compiled just as in https://github.com/mind/wheels/releases/tag/tf1.5-gpu-cuda91-nomkl, but on Ubuntu 14.04? My system has libc-2.17.so instead of the libc-2.23.so that ships with Ubuntu 16.04, which makes it impossible to even import TF (see the error message below).
$ python
>>> import tensorflow as tf
...
ImportError: /lib64/libm.so.6: version `GLIBC_2.23' not found
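A quick way to confirm the mismatch before picking a wheel is to ask the system for its glibc version; the tf1.5 wheels above were linked against glibc 2.23 (Ubuntu 16.04), while Ubuntu 14.04 and CentOS 7 ship 2.17. A minimal check:

```shell
# Print the glibc version this system provides; a wheel linked against a
# newer glibc than this will fail at import time, exactly as above.
getconf GNU_LIBC_VERSION      # e.g. "glibc 2.17" on Ubuntu 14.04
ldd --version | head -n 1     # same information via the dynamic loader
```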
Hi,
could you please add support for the AVX-512F instruction set?
Thanks a lot!
Would you please check the source?
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
On my system:
libcublas.so -> libcublas.so.9.1
Can you compile a build against the latest CUDA? I always get errors when I compile it myself.
ERROR: /home/cyhighbuyer/tensorflow/tensorflow/contrib/seq2seq/BUILD:51:1: error while parsing .d file: /home/cyhighbuyer/.cache/bazel/_bazel_cyhighbuyer/8f9b4a28fd7da1a64dacddabb5efd73d/execroot/org_tensorflow/bazel-out/local_linux-py3-opt/bin/tensorflow/contrib/seq2seq/_objs/python/ops/_beam_search_ops_gpu/tensorflow/contrib/seq2seq/kernels/beam_search_ops_gpu.cu.pic.d (No such file or directory).
In file included from external/eigen_archive/unsupported/Eigen/CXX11/Tensor:14:0,
from ./third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1,
from ./tensorflow/contrib/seq2seq/kernels/beam_search_ops.h:19,
from tensorflow/contrib/seq2seq/kernels/beam_search_ops_gpu.cu.cc:20:
external/eigen_archive/unsupported/Eigen/CXX11/../../../Eigen/Core:59:34: fatal error: math_functions.hpp: No such file or directory
#include <math_functions.hpp>
Could you provide a build on top of CUDA 9.1 without optimizations (no AVX2, etc.)? :)
Hi,
I have Python 3.6.3 on Ubuntu 16.04 with CUDA 9, cuDNN 7, and a GTX 1080 Ti.
I want to install tensorflow-1.4.0-cp36-cp36m-linux_x86_64.whl on it,
but when I run pip -.... I get this message:
Requirement 'tensorflow..' looks like a filename, but the file does not exist
followed by an exception:
Traceback (most recent call last):
file /...pip/basecommand.py'
'install.py'
pip/req/req_set.py
'download.py'
It seems pip cannot see the file!
As said in the title, thanks!
Can you please add a build for Tensorflow version 1.4.1 with MKL and CUDA 8?
Thanks
Thank you very much for the great wheels.
I recently upgraded TF 1.3 -> 1.4 GPU on an i9-7900X CPU with no major issues
(the Intel Math Kernel Library had to be installed and numpy upgraded, but that was fine).
May I ask about a new addition regarding this info message I got:
"Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F"
Any plans for this?
Thank you once again and best regards / thupalo
When I install tensorflow 1.5 (https://github.com/mind/wheels/releases/download/tf1.5-gpu/tensorflow-1.5.0-cp27-cp27mu-linux_x86_64.whl), the installation succeeds, but importing tensorflow reports 'ImportError: /lib64/libm.so.6: version `GLIBC_2.23' not found'. Should I update glibc from 2.17 to 2.23?
The glibc version this wheel requires is 2.23, which may be a bit too high for CentOS users. Is it possible to provide a wheel built against a lower glibc version?
/home/18781a/venv/lib/python2.7/site-packages/tensorflow/__init__.py in <module>()
22
23 # pylint: disable=wildcard-import
---> 24 from tensorflow.python import *
25 # pylint: enable=wildcard-import
26
/home/18781a/venv/lib/python2.7/site-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 # Protocol buffers
/home/18781a/venv/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
70 for some common reasons and solutions. Include the entire stack trace
71 above this error message when asking for help.""" % traceback.format_exc()
---> 72 raise ImportError(msg)
73
74 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "/home/18781a/venv/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/home/18781a/venv/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/home/18781a/venv/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: /usr/lib64/libm.so.6: version `GLIBC_2.23' not found (required by /home/18781a/venv/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)
Failed to load the native TensorFlow runtime.
Hello,
I was trying to install the wheel for python 3.5 and CUDA 9.1 via pip and got the following error:
Downloading https://github.com/mind/wheels/releases/download/tf1.4.1-gpu-cuda91-generic/tensorflow-1.4.1-cp35-cp35m-linux_x86_64.whl (129.6MB)
100% |████████████████████████████████| 129.6MB 505kB/s
Collecting enum34>=1.1.6 (from tensorflow==1.4.1)
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbc90958da0>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/enum34/
Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbc90958cf8>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/enum34/
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbc90958be0>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/enum34/
Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbc909584a8>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/enum34/
Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbc90958d30>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/enum34/
Could not find a version that satisfies the requirement enum34>=1.1.6 (from tensorflow==1.4.1) (from versions: )
No matching distribution found for enum34>=1.1.6 (from tensorflow==1.4.1)
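The wheel itself downloaded fine; it is the enum34 dependency that pip tries to fetch from PyPI, which fails because the machine has no outbound network access. One workaround, assuming you have a second machine with internet access, is to pre-download everything there and install offline (the ./pkgs directory name is just an example):

```shell
# On a machine with network access: collect the wheel and its dependencies.
pip download tensorflow-1.4.1-cp35-cp35m-linux_x86_64.whl -d ./pkgs
# Copy ./pkgs to the offline machine, then install without contacting PyPI.
pip install --no-index --find-links ./pkgs tensorflow-1.4.1-cp35-cp35m-linux_x86_64.whl
```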
This is a very useful repository! Do you guys have any plans to include XLA support in future builds? I realize that it is still an experimental feature, but it would be quite useful to have a build around that supports it. TF is also configured to disable this by default, so it would not change the overall behavior of the wheels you guys provide.
OS: win 10 64 bit
python 3.6 64 bit
cuda 9.1
I tried pip --no-cache-dir install https://github.com/mind/wheels/releases/download/tf1.4.1-gpu-cuda91/tensorflow-1.4.0-cp36-cp36m-linux_x86_64.whl
but it says "tensorflow-1.4.0-cp36-cp36m-linux_x86_64.whl is not a supported wheel on this platform."
Help!
When I install "tensorflow-1.3.1-cp35-cp35m-linux_x86_64.whl", it says "tensorflow-1.3.1-cp35-cp35m-linux_x86_64.whl is not a supported wheel on this platform."
I think this wheel was built on Ubuntu and cannot be used on CentOS, right? Could you build some wheels that support CentOS? Thanks a lot!
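For what it's worth, "not a supported wheel on this platform" comes from the tags in the wheel's filename (CPython version, ABI, platform), not from the distro: a cp35-cp35m-linux_x86_64 wheel needs CPython 3.5 on 64-bit Linux. An Ubuntu-vs-CentOS mismatch usually surfaces later, as a glibc ImportError. A quick sketch to compare your interpreter against the filename tags:

```shell
# The wheel's filename must match these two values.
python3 -c 'import sys; print("cp%d%d" % sys.version_info[:2])'  # e.g. cp35
uname -m                                                         # must be x86_64
```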
First, very nice work.
I had been looking for a variety of prebuilt wheels forever, so thanks.
Also, can you add builds of TensorFlow Serving with CPU optimizations?
Hello there,
firstly, thank you for producing these wheels; they are spectacularly useful. However, I have run into a small compatibility issue: Nvidia has shipped cuDNN 7.1 in its CUDA Docker images, which is incompatible with your current 1.6 release. Running it yields:
E tensorflow/stream_executor/cuda/cuda_dnn.cc:378] Loaded runtime CuDNN library: 7101 (compatibility version 7100) but source was compiled with 7005 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
If you would be so kind as to do a release built with cuDNN 7.1, that would be most wonderful.
Any plans to build a Windows version?
Thanks!
I downloaded the "generic" build:
https://github.com/mind/wheels/releases/download/tf1.4.1-gpu-cuda91-generic/tensorflow-1.4.1-cp35-cp35m-linux_x86_64.whl
But I can't use it on my AMD FX 8350, because this processor has AVX support but no AVX2 support. When I try to use TensorFlow, I unsurprisingly get:
2018-01-23 01:34:47.760697: F tensorflow/core/platform/cpu_feature_guard.cc:36] The TensorFlow library was compiled to use AVX2 instructions, but these aren't available on your machine.
Can you provide the generic version without (AVX/)AVX2 support?
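Before picking a build, you can check exactly which SIMD extensions the CPU reports; on an FX 8350 this should list avx but not avx2 or fma. A minimal check on Linux:

```shell
# List the SIMD flags this CPU advertises that are relevant to TF builds.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' \
  | grep -Ex 'sse4_1|sse4_2|avx|avx2|fma' || echo "none of these flags found"
```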
Can you release a python 3.6 version of the latest build with XLA?
@danqing
First of all, thank you so much for this effort; it really helps me out a lot!
I would appreciate it if you could build this for our Ubuntu 16.04 machine:
we run an 'old' GTX Titan Black (basically a K40) with 6 GB of VRAM.
Hello,
Would you be able to offer any advice on setting up with this library, given that your setup is specifically for Intel CPUs?
I have cuda 9 and cuDNN v7.0.
Thanks
Following the MKL instructions here does not yield the necessary libmklml_intel.so file in the library directory.
https://github.com/mind/wheels#mkl
Can you provide a CPU version built with MKL?
How about adding a build for OSX?
Not an issue, but a general question: will having MKL matter at all if the calculations are being done on the GPU?
Would you be interested in providing versions of TensorFlow with symbol tables? That would make it much easier to troubleshoot segfaults.
Optimized build with symbol tables (almost as fast, and lets you query local variables in gdb):
bazel build -c opt --cxxopt=-g2 --linkopt=-g2 --strip=never
Fully debuggable build (i.e., gives line numbers, but can be 10x slower):
bazel build -c dbg
I installed the wheel for Python 3.6 from here. However, when trying to import tensorflow, I see the error below. Even if I install mkl-dnn by hand from the repo, I get the same error.
$ ipython
Python 3.6.3 |Anaconda, Inc.| (default, Oct 13 2017, 12:02:49)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import tensorflow as tf
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~/venvs/test/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
~/venvs/test/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py in <module>()
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
~/venvs/test/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
~/venvs/test/lib/python3.6/imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
~/venvs/test/lib/python3.6/imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: libmklml_intel.so: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-1-64156d691fe5> in <module>()
----> 1 import tensorflow as tf
~/venvs/test/lib/python3.6/site-packages/tensorflow/__init__.py in <module>()
22
23 # pylint: disable=wildcard-import
---> 24 from tensorflow.python import *
25 # pylint: enable=wildcard-import
26
~/venvs/test/lib/python3.6/site-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 # Protocol buffers
~/venvs/test/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
70 for some common reasons and solutions. Include the entire stack trace
71 above this error message when asking for help.""" % traceback.format_exc()
---> 72 raise ImportError(msg)
73
74 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "~/venvs/test/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "~/venvs/test/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "~/venvs/test/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "~/venvs/test/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "~/venvs/test/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libmklml_intel.so: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
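Python found the TensorFlow package; it is the dynamic loader that cannot find libmklml_intel.so. A sketch of the usual fix, assuming the MKL install placed the library under /usr/local/lib (adjust the path to wherever find locates it on your machine):

```shell
# Locate the library the loader is missing.
find /usr /opt "$HOME" -name 'libmklml_intel.so' 2>/dev/null || true
# Put its directory on the loader path in the SAME shell that runs python.
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
python -c 'import tensorflow' 2>/dev/null || true   # retry the import
```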
The Nvidia GRID driver seemingly only provides 384.111, which supports CUDA 9.0 and below. Could you add another build of version 1.6 against CUDA 9.0, please?
I have no root access, so I can't run the sudo commands in the MKL install instructions.
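For what it's worth, the sudo steps are only needed to copy the libraries into a system directory and refresh the ld cache; the loader also honours LD_LIBRARY_PATH, so a user-local layout works without root. A sketch under the assumption that you have the mklml tarball unpacked as mklml_lnx/ (names hypothetical):

```shell
# Stage the MKL runtime libraries under $HOME instead of /usr/local.
mkdir -p "$HOME/.local/lib"
cp mklml_lnx/lib/*.so "$HOME/.local/lib"/ 2>/dev/null || true  # hypothetical tarball layout
# Make the loader look there; add this line to ~/.bashrc to persist it.
export LD_LIBRARY_PATH="$HOME/.local/lib:$LD_LIBRARY_PATH"
```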
Could you include some TensorFlow 2.0 and CUDA 9 wheels? I think it would be helpful; there are many cases online of people having to compile from source for this combination.
Could you please update the macOS wheels? The last one was released nearly 18 months ago, and it has TensorFlow 1.4 (!):
https://github.com/mind/wheels/releases/tag/tf1.4-cpu-mac
Thanks & regards
Please add a tensorflow_cpu 1.4.0 build with SSE4.1, SSE4.2, and AVX, but without AVX2. Thank you.
tensorflow/core/platform/cpu_feature_guard.cc:36] The TensorFlow library was compiled to use AVX2 instructions, but these aren't available on your machine.
Thanks so much for your work, it's awesome!
Will the minimum required CUDA compute capability be set to 3.5 when building? Or where can I configure it?
Thanks so much. ^o^
Best Regards.
What flags are actually given to the compiler (-march=...), i.e., what parameters is bazel called with (--copt=-mavx2, --copt=-mfma, etc.)? And is g++ or clang++ used?
I haven't seen any activity in the repo from its maintainers in some time. Is this still maintained?
Did you use the AVX2 flag when building the GPU version with Python 3.6?
Where did you edit the release build code? I want to use tensorflow 1.1 under CUDA 9.1.
Could you please release version 1.9, especially tf1.9-cpu?
Thank you!
I am trying to install tensorflow-gpu on servers (CUDA 9) where I have no access to sudo.
Hi,
thanks for your work building TensorFlow with the newest CUDA.
Can you build TensorFlow 1.8 with CUDA 9.2 and publish the wheel?
I also ran into a problem:
ImportError: libcudnn.so.7: cannot open shared object file: No such file or directory
I have already added the environment variables:
export CUDA_HOME=/usr/local/cuda
export PATH=$PATH:$CUDA_HOME/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64
I run the command:
ls -l /usr/local/cuda/lib64/libcudn*
I get the following result:
-rwxr-xr-x 1 root root 282621088 Jan 18 11:27 /usr/local/cuda/lib64/libcudnn.so
-rwxr-xr-x 1 root root 282621088 Jan 18 11:27 /usr/local/cuda/lib64/libcudnn.so.7
-rwxr-xr-x 1 root root 282621088 Jan 18 11:27 /usr/local/cuda/lib64/libcudnn.so.7.0.5
-rw-r--r-- 1 root root 277149668 Jan 18 11:27 /usr/local/cuda/lib64/libcudnn_static.a
I think the output above shows that CUDA and cuDNN are already installed on my computer.
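The files are indeed there, so the likely culprit is that the export lines were run in a different shell (or only added to a dotfile that wasn't re-sourced) than the one launching python. A couple of quick checks:

```shell
# What can the dynamic loader actually see right now?
ldconfig -p 2>/dev/null | grep cudnn || echo "libcudnn is not in the ld cache"
# Is the cuda lib dir on the loader path of THIS shell?
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep cuda \
  || echo "cuda dirs missing from LD_LIBRARY_PATH"
# If the second check fails, re-run the exports and start python from the same shell.
```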
Since the TensorFlow wheels from PyPI are now built with AVX2 by default, can you provide a CUDA generic version for 1.6 through 1.8?
Hi everyone, I have just built a wheel for Linux with CUDA 9.1, cuDNN 7.1, NCCL 2.1, and Python 3.6.
You can find it at the following link!
Maybe you want to add it to the repository with the rest of the community wheels?
Cheers!