
Introducing popular open source libraries

There are many open source libraries that allow the creation of deep neural networks in Python, without having to write the code explicitly from scratch. In this book, we'll use three of the most popular: TensorFlow, Keras, and PyTorch. They all share some common features, as follows:

  • The basic unit for data storage is the tensor. Consider the tensor as a generalization of a matrix to higher dimensions. Mathematically, the definition of a tensor is more complex, but in the context of deep learning libraries, a tensor is simply a multi-dimensional array of base values. A tensor is similar to a NumPy array and is made up of the following:
    • A basic data type for the tensor elements. These can vary between libraries, but typically include 16-, 32-, and 64-bit floats and 8-, 16-, 32-, and 64-bit integers.
    • An arbitrary number of axes (also known as the rank, order, or degree of the tensor). A 0D tensor is just a scalar value, a 1D tensor is a vector, a 2D tensor is a matrix, and so on. In deep networks, the data is propagated in batches of n samples. This is done for performance reasons, but it also suits the notion of stochastic gradient descent. For example, if the input data is one-dimensional, such as [0, 1], [1, 0], [0, 0], and [1, 1] for XOR values, we'll actually work with a 2D tensor [[0, 1], [1, 0], [0, 0], [1, 1]] to represent all of the samples in a single batch. Alternatively, two-dimensional grayscale images will be represented as a three-dimensional tensor. In the context of deep learning libraries, the first axis of the tensor represents the different samples.
    • A shape that is the size (the number of values) of each axis of the tensor. For example, the XOR tensor from the preceding example will have a shape of (4, 2). A tensor representing a batch of 32 128x128 images will have a shape of (32, 128, 128).
  • Neural networks are represented as a computational graph of operations. The nodes of the graph represent the operations (weighted sum, activation function, and so on). The edges represent the flow of data, which is how the output of one operation serves as an input for the next one. The inputs and outputs of the operations (including the network inputs and outputs) are tensors.
  • All libraries include automatic differentiation. This means that all you need to do is define the network architecture and activation functions, and the library will automatically figure out all of the derivatives required for training with backpropagation.
  • All libraries use Python.
  • Until now, we've referred to GPUs in general, but in reality, the vast majority of deep learning projects work exclusively with NVIDIA GPUs. This is because of the better software support that NVIDIA provides. These libraries are no exception – to implement GPU operations, they rely on the CUDA toolkit in combination with the cuDNN library. cuDNN is an extension of CUDA, built specifically for deep learning applications. As was previously mentioned in the Applications of deep learning section, you can also run your deep learning experiments in the cloud.
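The batch and shape conventions described above can be sketched with NumPy, which the text compares tensors to (the array contents here are just the XOR samples from the example, plus a placeholder image batch):

```python
import numpy as np

# The four XOR input samples batched into a single 2D tensor.
# Axis 0 indexes the samples; axis 1 indexes the input features.
xor_batch = np.array([[0, 1], [1, 0], [0, 0], [1, 1]], dtype=np.float32)
print(xor_batch.ndim)   # 2 (the rank of the tensor)
print(xor_batch.shape)  # (4, 2): 4 samples, 2 values each

# A batch of 32 grayscale 128x128 images as a 3D tensor:
# (samples, height, width).
images = np.zeros((32, 128, 128), dtype=np.float32)
print(images.shape)     # (32, 128, 128)
```

The same shapes carry over directly to TensorFlow, Keras, and PyTorch tensors, since all three follow the samples-first axis convention.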

For each of these libraries, we will briefly describe how to switch between a GPU and a CPU. Much of the code in this book can then be run on either a CPU or a GPU, depending on the hardware available to the reader.
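As one illustration of this pattern, here is a minimal device-selection sketch in PyTorch (the other libraries offer analogous mechanisms):

```python
import torch

# Pick the GPU when CUDA is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors (and models) are moved to the chosen device with .to(device);
# the rest of the code stays the same on either backend.
x = torch.ones(4, 2).to(device)
print(device.type)  # "cuda" or "cpu"
```

Writing code against a single `device` variable like this keeps the CPU and GPU paths identical, which is the style we'll follow throughout the book.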

At the time of writing, the latest versions of the libraries are the following:

  • TensorFlow 1.12.0
  • PyTorch 1.0
  • Keras 2.2.4

We'll use them throughout the book.
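To give a first taste of the automatic differentiation feature mentioned earlier, here is a minimal PyTorch sketch: we only define the forward computation y = x², and the library derives the gradient dy/dx = 2x for us:

```python
import torch

# Automatic differentiation: define the forward pass, and the library
# computes the derivatives needed for backpropagation.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2      # forward pass: y = x^2
y.backward()    # backpropagation
print(x.grad)   # dy/dx = 2x = 6.0
```

TensorFlow and Keras provide the same capability through their own graph and gradient-tape machinery; we'll see library-specific examples in the following chapters.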
