- Deep Learning Quick Reference
- Mike Bernico
Installing the NVIDIA CUDA Toolkit and cuDNN
Since you'll likely be using a cloud-based solution for your deep learning work, I've included instructions that will get you up and running quickly on Ubuntu Linux, which is commonly available across cloud providers. It's also possible to install TensorFlow and Keras on Windows. Unfortunately, as of TensorFlow v1.2, TensorFlow does not support GPUs on OS X.
Before we can utilize the GPU, the NVIDIA CUDA Toolkit and cuDNN must be installed. We will be installing CUDA Toolkit 8.0 and cuDNN v6.0, the versions recommended for use with TensorFlow v1.4. There is a good chance that a new version will be released before you finish reading this paragraph, so check www.tensorflow.org for the latest required versions.
We will start by installing the build-essential package on Ubuntu, which contains most of what we need to compile C++ programs. The code is given here:
sudo apt-get update
sudo apt-get install build-essential
Next, we can download and install the CUDA Toolkit. As previously mentioned, we will be installing version 8.0 and its associated patch. You can find the CUDA Toolkit that is right for you at https://developer.nvidia.com/cuda-zone.
wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda_8.0.61_375.26_linux-run
sudo sh cuda_8.0.61_375.26_linux-run # Accept the EULA and choose defaults
wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/patches/2/cuda_8.0.61.2_linux-run
sudo sh cuda_8.0.61.2_linux-run # Accept the EULA and choose defaults
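Once the installer and patch have run, you can confirm that the toolkit landed by asking the CUDA compiler for its version (nvcc is placed under /usr/local/cuda/bin by default; this check assumes you accepted the default install location):

```shell
# Query the CUDA compiler version; a successful install of
# CUDA Toolkit 8.0 reports "release 8.0" in its output
/usr/local/cuda/bin/nvcc --version
```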
The CUDA Toolkit should now be installed in the following path: /usr/local/cuda. You'll need to add a few environment variables so that TensorFlow can find it. You should probably consider adding these environment variables to ~/.bash_profile, so that they're set at every login, as shown in the following code:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME="/usr/local/cuda"
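The two exports above can be appended to ~/.bash_profile and reloaded in one step. A minimal sketch (the grep guard simply avoids appending the same lines twice if you rerun it):

```shell
# Append the CUDA environment variables to ~/.bash_profile,
# skipping the append if they are already present
grep -q 'CUDA_HOME' ~/.bash_profile 2>/dev/null || cat >> ~/.bash_profile <<'EOF'
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME="/usr/local/cuda"
EOF

# Reload the profile in the current shell and confirm the variable is set
. ~/.bash_profile
echo "$CUDA_HOME"   # prints /usr/local/cuda
```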
At this point, you can test that everything is working by executing the following command: nvidia-smi. The output should look similar to this:
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 0000:00:1E.0     Off |                    0 |
| N/A   41C    P0    57W / 149W |      0MiB / 11439MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
Lastly, we need to install cuDNN, which is the NVIDIA CUDA Deep Neural Network library.
First, download cuDNN to your local computer. To do so, you will need to register as a developer in the NVIDIA Developer Network. You can find cuDNN on the cuDNN homepage at https://developer.nvidia.com/cuDNN. Once you have downloaded it to your local computer, you can use scp to move it to your EC2 instance. While exact instructions will vary by cloud provider, you can find additional information about connecting to AWS EC2 via SSH/SCP at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html.
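As a rough illustration, the scp invocation looks something like the following; the key file, username, and hostname here are hypothetical placeholders, not values from this book, so substitute your own:

```shell
# Copy the cuDNN archive from your local machine to the instance's home directory;
# my-key.pem and the ec2-… hostname are hypothetical placeholders
scp -i ~/.ssh/my-key.pem cudnn-8.0-linux-x64-v6.0.tgz \
    ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com:~/
```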
Once you've moved cuDNN to your EC2 image, you can unpack the file, using the following code:
tar -xzvf cudnn-8.0-linux-x64-v6.0.tgz
Finally, copy the unpacked files to their appropriate locations, using the following code:
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/* /usr/local/cuda/lib64
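If you want to double-check which cuDNN version ended up in place, the version macros live in the header you just copied (this assumes the default /usr/local/cuda install path used above):

```shell
# Print the cuDNN version macros from the installed header;
# for cuDNN v6.0 this shows CUDNN_MAJOR 6 and CUDNN_MINOR 0
grep -A 2 '#define CUDNN_MAJOR' /usr/local/cuda/include/cudnn.h
```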