How to upgrade your TensorFlow version

How do I upgrade TensorFlow on Windows 10? (a 百度知道 Q&A)
The Windows 10 Anniversary Update added a Linux Bash shell, which you can open by typing bash in cmd. With this feature you can get TensorFlow running on Win10. Reportedly Bash does not yet have permission to access the graphics card, so the GPU build cannot be used; the machine I am using has no NVIDIA card, so I have not installed the GPU build to test that. First open bash from cmd, then install the required dependencies:
sudo apt-get install python-pip python-dev
sudo apt-get install python-numpy python-scipy
Then install TensorFlow itself. Here you cannot simply run pip install tensorflow; you have to install a specific wheel instead, taking version 0.11.0rc0 as an example.
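A quick way to confirm that the install inside the bash environment actually worked (an editor's addition, not part of the original answer) is to import the package and print its version:
import tensorflow as tf
# Should print the installed release, e.g. 0.11.0rc0; some older builds expose tf.VERSION instead.
print(tf.__version__)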
Updating the TensorFlow installation environment - shihuc - 博客园
This post is a continuation of my previous two TensorFlow posts; it updates the runtime environment.
Main motivation:
The programs in those two posts were actually running without the GPU. This post describes how to make use of the GPU.
First, reinstall a GPU-enabled TensorFlow through pip, using the --upgrade option.
[root@bogon tensorflow]# pip install --upgrade tensorflow-gpu
Collecting tensorflow-gpu
  Downloading tensorflow_gpu-1.0.x-cp27-cp27mu-manylinux1_x86_64.whl
Requirement already up-to-date: protobuf>=3.1.0 in /usr/lib64/python2.7/site-packages (from tensorflow-gpu)
Requirement already up-to-date: six>=1.10.0 in /usr/lib/python2.7/site-packages (from tensorflow-gpu)
Requirement already up-to-date: wheel in /usr/lib/python2.7/site-packages (from tensorflow-gpu)
Requirement already up-to-date: mock>=2.0.0 in /usr/lib/python2.7/site-packages (from tensorflow-gpu)
Requirement already up-to-date: numpy>=1.11.0 in /usr/lib64/python2.7/site-packages (from tensorflow-gpu)
Requirement already up-to-date: setuptools in /usr/lib/python2.7/site-packages (from protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: funcsigs>=1; python_version < "3.3" in /usr/lib/python2.7/site-packages (from mock>=2.0.0->tensorflow-gpu)
Requirement already up-to-date: pbr>=0.11 in /usr/lib/python2.7/site-packages (from mock>=2.0.0->tensorflow-gpu)
Requirement already up-to-date: appdirs>=1.4.0 in /usr/lib/python2.7/site-packages (from setuptools->protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: packaging>=16.8 in /usr/lib/python2.7/site-packages (from setuptools->protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: pyparsing in /usr/lib/python2.7/site-packages (from packaging>=16.8->setuptools->protobuf>=3.1.0->tensorflow-gpu)
Installing collected packages: tensorflow-gpu
Successfully installed tensorflow-gpu-1.0.x
This step completed without any problems.
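Before rerunning the model, a quick check of my own (not from the original post) using TensorFlow's device_lib helper shows whether the freshly installed GPU build can see the card at all:
from tensorflow.python.client import device_lib
# A working GPU build should list a /gpu:0 device next to /cpu:0.
print([d.name for d in device_lib.list_local_devices()])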
Next, rerun the MNIST handwritten-digit recognition program to verify whether the GPU is being used.
[root@bogon tensorflow]# python mnist_demo1.py
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:126] Couldn't open CUDA library libcudnn.so.5. LD_LIBRARY_PATH: /usr/local/cuda-8.0/lib64:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:3517] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
F tensorflow/stream_executor/cuda/cuda_dnn.cc:222] Check failed: s.ok() could not find cudnnCreate in cudnn DSO; dlerror: /usr/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: undefined symbol: cudnnCreate
Aborted (core dumped)
The last lines above show an error: the cuDNN shared library cannot be found. Go into the CUDA installation path and check whether that .so exists.
[root@bogon lib64]# ll libcudnn*
libcudnn.so.5.1
libcudnn.so.5.1.10
libcudnn_static.a
There is indeed no libcudnn.so.5 file.
So, create a symbolic link that makes libcudnn.so.5 point to libcudnn.so.5.1.
[root@bogon lib64]# ln -s libcudnn.so.5.1 libcudnn.so.5
[root@bogon lib64]# ll libcudnn*
lrwxrwxrwx. 1 root root libcudnn.so.5 -> libcudnn.so.5.1
lrwxrwxrwx. 1 root root libcudnn.so.5.1 -> libcudnn.so.5.1.10
-rwxr-xr-x. 1 root root libcudnn.so.5.1.10
-rw-r--r--. 1 root root libcudnn_static.a
Now the libcudnn.so.5 file is there.
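As an extra check of my own (not in the original post), you can confirm from Python that the dynamic loader now resolves the new name, assuming the CUDA lib64 directory is on LD_LIBRARY_PATH:
import ctypes
# Raises OSError if the loader still cannot find libcudnn.so.5.
ctypes.CDLL("libcudnn.so.5")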
Run the MNIST handwriting-recognition program again to verify.
[root@bogon tensorflow]# python mnist_demo1.py
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
0.9092
At this point, my TensorFlow runtime environment is GPU-based.
For reference, here is the mnist_demo1.py used in the test:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
sess = tf.InteractiveSession()

x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

init = tf.global_variables_initializer()
sess.run(init)

y = tf.nn.softmax(tf.matmul(x, w) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

for i in range(1000):
    batch = mnist.train.next_batch(100)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})
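If you want explicit proof that the ops are being placed on the GPU, a small sketch of my own (not in the original post) using the standard log_device_placement session option looks like this:
import tensorflow as tf
# With log_device_placement=True the runtime prints the device chosen for every op.
config = tf.ConfigProto(log_device_placement=True)
sess = tf.Session(config=config)
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
print(sess.run(a + b))
# The log should show a, b and the add op mapped to device /gpu:0.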
Finally, a note on the cpu_feature_guard WARNING lines above (SSE3, SSE4.1, SSE4.2, AVX, AVX2 and FMA not compiled in): I have not dealt with them for now. The only remedy I know of is to build TensorFlow from source with bazel, which lets the build use these CPU instruction sets. Since they do not really affect the experiment, I am leaving them alone for the moment.
The following installation guide was produced by the 极客学院 (JiKeXueYuan) team.
Download and Setup
You can install TensorFlow with the Pip, Docker, Virtualenv, or Anaconda packages we provide, or by building from source.
Pip installation
Pip is a package installation and management tool for Python.
The packages that will be installed or upgraded during the TensorFlow installation are listed in the REQUIRED_PACKAGES section of TensorFlow's setup.py.
First install pip (or pip3 for Python 3):
# Ubuntu/Linux 64-bit
$ sudo apt-get install python-pip python-dev
# Mac OS X
$ sudo easy_install pip
Install TensorFlow:
# Ubuntu/Linux 64-bit, CPU only, Python 2.7:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7. Requires CUDA toolkit 7.5 and CuDNN v4.
# For other versions, see "Install from sources" below.
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0-py2-none-any.whl
For Python 3:
# Ubuntu/Linux 64-bit, CPU only, Python 3.4:
$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4. Requires CUDA toolkit 7.5 and CuDNN v4.
# For other versions, see "Install from sources" below.
$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0-cp34-cp34m-linux_x86_64.whl
# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0-py3-none-any.whl
Note: if you previously installed a TensorFlow version earlier than 0.7.1, you should first uninstall both TensorFlow and protobuf with pip uninstall, so that you get a package built against an up-to-date protobuf dependency.
After that, you can test the installation (see "Run TensorFlow" below).
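One way to confirm which versions you actually ended up with after the upgrade (an editor's addition, using the standard pkg_resources API from setuptools) is:
import pkg_resources
# Both calls raise DistributionNotFound if the package is missing.
print(pkg_resources.get_distribution("tensorflow").version)
print(pkg_resources.get_distribution("protobuf").version)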
Docker-based installation
We also support running TensorFlow via Docker.
The advantage of this approach is that you do not have to worry about software dependencies.
First, install Docker on your machine. Once Docker is up and running, you can start a container with this command:
$ docker run -it b.gcr.io/tensorflow/tensorflow
This command starts a container that already has TensorFlow and its dependencies installed.
The default Docker image contains only a minimal set of the libraries needed to start and run TensorFlow. We additionally provide the following container, which can be installed with the same docker run command:
b.gcr.io/tensorflow/tensorflow-full: in this image TensorFlow is fully installed from source, together with all the tools needed to build and run it. On this image you can experiment directly with the source code, without installing any of the dependencies above.
Virtualenv-based installation
We recommend using virtualenv to create an isolated environment in which to install TensorFlow. This is optional, but it makes troubleshooting installation problems easier.
First, install all the prerequisites:
# On Linux:
$ sudo apt-get install python-pip python-dev python-virtualenv
# On Mac:
$ sudo easy_install pip              # if pip is not installed yet
$ sudo pip install --upgrade virtualenv
Next, create a fresh virtualenv environment. To put it in the ~/tensorflow directory, run:
$ virtualenv --system-site-packages ~/tensorflow
$ cd ~/tensorflow
Then activate the virtualenv:
$ source bin/activate        # if using bash
$ source bin/activate.csh    # if using csh
(tensorflow)$                # your prompt should change
Inside the virtualenv, install TensorFlow:
(tensorflow)$ pip install --upgrade <$url_to_binary.whl>
You can then run your TensorFlow program as usual:
(tensorflow)$ cd tensorflow/models/image/mnist
(tensorflow)$ python convolutional.py
# When you are done using TensorFlow:
(tensorflow)$ deactivate             # deactivate the virtualenv; your prompt will return to normal
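While the virtualenv is still active, a small check (an editor's addition, not part of the original guide) confirms that the interpreter is using its own isolated copy of TensorFlow:
import sys
import tensorflow as tf
print(sys.prefix)     # should point at the ~/tensorflow virtualenv
print(tf.__file__)    # should live under that virtualenv's site-packages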
Anaconda-based installation
Anaconda is a Python scientific computing distribution that bundles many third-party scientific libraries. It uses conda as its package manager and has its own environment system, similar to Virtualenv.
As with Virtualenv, conda keeps the dependencies of different Python projects in separate places. Installing TensorFlow inside Anaconda will not overwrite previously installed Python packages.
The steps are:
Install Anaconda
Create a conda environment
Activate the environment and install TensorFlow in it
After installation, activate the conda environment every time you want to use TensorFlow
Install Anaconda: follow the instructions on the Anaconda download page.
Create a conda environment named tensorflow:
# Python 2.7
$ conda create -n tensorflow python=2.7
# Python 3.4
$ conda create -n tensorflow python=3.4
Activate the tensorflow environment and use the pip inside it to install TensorFlow. Pass the --ignore-installed flag to avoid errors about packages that were previously installed with easy_install.
$ source activate tensorflow
(tensorflow)$
# Your prompt should change
# Ubuntu/Linux 64-bit, CPU only, Python 2.7:
(tensorflow)$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7. Requires CUDA toolkit 7.5 and CuDNN v4.
# For other versions, see "Install from sources" below.
(tensorflow)$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only:
(tensorflow)$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py2-none-any.whl
For Python 3.x:
$ source activate tensorflow
(tensorflow)$
# Your prompt should change
# Ubuntu/Linux 64-bit, CPU only, Python 3.4:
(tensorflow)$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4. Requires CUDA toolkit 7.5 and CuDNN v4.
# For other versions, see "Install from sources" below.
(tensorflow)$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0rc0-cp34-cp34m-linux_x86_64.whl
# Mac OS X, CPU only:
(tensorflow)$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py3-none-any.whl
Once the conda environment is activated, you can run your TensorFlow program.
When you are done using TensorFlow, deactivate the environment:
(tensorflow)$ source deactivate
# Your prompt should change back
Activate it again the next time you want to use it :-)
$ source activate tensorflow
(tensorflow)$
# Your prompt should change.
# Run Python programs that use TensorFlow.
# When you are done using TensorFlow, deactivate the environment.
(tensorflow)$ source deactivate
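A quick way to confirm that the conda environment is the one actually in use (an editor's addition; CONDA_DEFAULT_ENV is an environment variable conda sets on activation):
import os
import sys
print(sys.executable)                          # should be .../envs/tensorflow/bin/python
print(os.environ.get("CONDA_DEFAULT_ENV"))     # should print "tensorflow" while the env is active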
Try your first TensorFlow program
(Optional) Enable GPU support
If you installed the GPU-enabled TensorFlow pip binary, you must make sure your system has the correct versions of the CUDA SDK and cuDNN installed; see the CUDA installation section below.
You also need to set the LD_LIBRARY_PATH and CUDA_HOME environment variables. Consider adding the commands below to your ~/.bash_profile so that they take effect automatically on every login. Note that the commands assume CUDA is installed in /usr/local/cuda:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
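Before importing TensorFlow, you can verify from Python that the two variables are visible to the process (a small sketch of the editor's, not part of the original guide):
import os
for var in ("LD_LIBRARY_PATH", "CUDA_HOME"):
    # If LD_LIBRARY_PATH does not include .../cuda/lib64, the GPU build will fail to load libcudart/libcudnn at import time.
    print("%s=%s" % (var, os.environ.get(var, "<not set>")))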
Run TensorFlow
Open a Python shell:
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print sess.run(hello)
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print sess.run(a+b)
42
Installing from source
Clone the TensorFlow repository:
$ git clone --recurse-submodules https://github.com/tensorflow/tensorflow
The --recurse-submodules flag is required in order to fetch the protobuf library that TensorFlow depends on.
Linux installation
Install Bazel
First install Bazel's dependencies, then download the latest stable Bazel installer for your operating system, and finally run the installer as follows:
$ chmod +x PATH_TO_INSTALL.SH
$ ./PATH_TO_INSTALL.SH --user
Replace PATH_TO_INSTALL.SH with the path of the installer you downloaded.
Add the executable path output/bazel to your $PATH environment variable.
Install other dependencies:
# For Python 2.7:
$ sudo apt-get install python-numpy swig python-dev python-wheel
# For Python 3.x:
$ sudo apt-get install python3-numpy swig python3-dev python3-wheel
Optional: install CUDA (GPU support on Linux)
To build and run TensorFlow with GPU support, you first need to install NVIDIA's Cuda Toolkit 7.0 and cuDNN 6.5 v2.
TensorFlow's GPU support requires an NVIDIA card with Compute Capability >= 3.5. Supported cards include, but are not limited to:
NVidia Titan
NVidia Titan X
NVidia K20
NVidia K40
Download and install Cuda Toolkit 7.0
Install the toolkit into a path such as /usr/local/cuda.
Download and install cuDNN Toolkit 6.5
Unpack the cuDNN files and copy them into the Cuda Toolkit 7.0 installation path. Assuming the toolkit is installed in /usr/local/cuda, run the following commands:
tar xvzf cudnn-6.5-linux-x64-v2.tgz
sudo cp cudnn-6.5-linux-x64-v2/cudnn.h /usr/local/cuda/include
sudo cp cudnn-6.5-linux-x64-v2/libcudnn* /usr/local/cuda/lib64
Configure TensorFlow's Cuda options
From the root of the source tree, run:
$ ./configure
Do you wish to build TensorFlow with GPU support? [y/n] y
GPU support will be enabled for TensorFlow
Please specify the location where CUDA 7.0 toolkit is installed. Refer to
README.md for more details. [default is: /usr/local/cuda]: /usr/local/cuda
Please specify the location where CUDNN 6.5 V2 library is installed. Refer to
README.md for more details. [default is: /usr/local/cuda]: /usr/local/cuda
Setting up Cuda include
Setting up Cuda lib64
Setting up Cuda bin
Setting up Cuda nvvm
Configuration finished
These configuration steps create symbolic links to your system's Cuda libraries. Whenever the Cuda library paths change, you must rerun this step, otherwise the bazel build will fail.
Build the target with GPU support enabled
From the root of the source tree, run:
$ bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
$ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
# Lots of output. This example iteratively computes the major eigenvalue of a 2x2 matrix on the GPU.
# The last few lines of output look like the following.
005 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
001 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
009 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
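For readers unfamiliar with what the trainer is doing: the major eigenvalue can be found by repeatedly multiplying a vector by the matrix and measuring how much it is stretched (power iteration). The sketch below (an editor's addition) uses NumPy and a hypothetical 2x2 matrix whose dominant eigenvalue is 2.0; the actual matrix used by tutorials_example_trainer is defined in its C++ source.
import numpy as np
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])      # hypothetical matrix with eigenvalues 2 and 1
x = np.array([1.0, 0.0])
for _ in range(50):
    y = A.dot(x)
    lam = np.linalg.norm(y) / np.linalg.norm(x)   # estimate of the dominant |eigenvalue|
    x = y / np.linalg.norm(y)                     # renormalize for the next step
print(lam)   # converges to 2.0, matching the lambda = 2.000000 lines above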
Note that GPU support must be enabled with the "--config=cuda" build option.
Although it is possible to build both Cuda-enabled and Cuda-disabled versions in the same source tree, we recommend running "bazel clean" when switching between these two configurations.
You must run configure before running the bazel build, otherwise the build will fail with an error message. In the future we may fold the configure step into the build process to simplify things, provided bazel offers the features needed to support this.
Mac OS X installation
The software dependencies on Mac are exactly the same as on Linux, but the installation process differs considerably. The Mac OS X installation guides of the respective dependencies will help you install them on Mac OS X.
Note: you need to install PCRE, not PCRE2.
Create the pip package and install it
$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# The actual name of the .whl file depends on your platform
$ pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
Train your first TensorFlow neural network model
From the root of the source tree, run:
$ cd tensorflow/models/image/mnist
$ python convolutional.py
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
Initialized!
Epoch 0.00
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Epoch 0.12
Minibatch loss: 3.285, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.0%
GPU-related issues
If you get the following error when trying to run a TensorFlow program:
ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory
make sure you installed GPU support correctly; see the CUDA installation section above.
On Linux
If you see the error:
"__add__", "__radd__",
SyntaxError: invalid syntax
Solution: make sure you are running Python 2.7.
On Mac OS X
If you see the error:
import six.moves.copyreg as copyreg
ImportError: No module named copyreg
Solution: the protobuf that TensorFlow depends on requires six-1.10.0, but Apple's default Python environment ships six-1.4.1, which can be difficult to upgrade. Here are several ways to work around the problem:
Upgrade six system-wide:
sudo easy_install -U six
Install a separate copy of Python via Homebrew:
brew install python
Build or use TensorFlow inside a virtualenv.
If you see the error:
>>> import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 4, in <module>
from tensorflow.python import *
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 13, in <module>
from tensorflow.core.framework.graph_pb2 import *
File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 22, in <module>
serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"d\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tb\x06proto3')
TypeError: __init__() got an unexpected keyword argument 'syntax'
This is caused by a conflicting protobuf installation; TensorFlow requires protobuf 3.0.0. The best current solution is to make sure no older protobuf version is installed; you can fix it by reinstalling protobuf with:
brew reinstall --devel protobuf
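After the reinstall you can confirm from Python which protobuf version is being picked up (an editor's addition, using the package's own version attribute):
import google.protobuf
print(google.protobuf.__version__)   # should report a 3.x version once the conflict is resolved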
