- Deep Learning with Theano
- Christopher Bourez
Memory and variables
It is good practice to always cast float arrays to the theano.config.floatX type:
- Either at array creation, with numpy.array(array, dtype=theano.config.floatX)
- Or by casting an existing array, with array.astype(theano.config.floatX)
so that when the graph is compiled for the GPU, the correct type is used. Both approaches are sketched below.
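For instance, a minimal sketch (the x and y names are illustrative):
>>> import numpy
>>> import theano
>>> # cast at creation time
>>> x = numpy.array([[0.5, 1.5]], dtype=theano.config.floatX)
>>> # or cast an existing float64 array afterwards
>>> y = numpy.ones((2, 2)).astype(theano.config.floatX)
>>> x.dtype == y.dtype == numpy.dtype(theano.config.floatX)
True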
For example, let's transfer the data manually to the GPU (for which the default context is None); for that purpose, we need to use float32 values:
>>> theano.config.floatX = 'float32'
>>> a = T.matrix()
>>> b = a.transfer(None)
>>> b.eval({a: numpy.ones((2, 2)).astype(theano.config.floatX)})
gpuarray.array([[ 1.  1.]
 [ 1.  1.]], dtype=float32)
>>> theano.printing.debugprint(b)
GpuFromHost<None> [id A] ''
 |<TensorType(float32, matrix)> [id B]
The transfer(device) functions, such as transfer('cpu'), enable us to move data from one device to another. This is particularly useful when parts of the graph have to be executed on different devices. Otherwise, Theano adds the transfers to the GPU automatically during the optimization phase:
>>> a = T.matrix('a')
>>> b = a ** 2
>>> sq = theano.function([a], b)
>>> theano.printing.debugprint(sq)
HostFromGpu(gpuarray) [id A] ''   2
 |GpuElemwise{Sqr}[(0, 0)]<gpuarray> [id B] ''   1
   |GpuFromHost<None> [id C] ''   0
     |a [id D]
When the transfer function is applied to the output explicitly, Theano omits the transfer back to the CPU. Leaving the output tensor on the GPU saves a costly transfer:
>>> b = b.transfer(None)
>>> sq = theano.function([a], b)
>>> theano.printing.debugprint(sq)
GpuElemwise{Sqr}[(0, 0)]<gpuarray> [id A] ''   1
 |GpuFromHost<None> [id B] ''   0
   |a [id C]
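With this graph, the compiled function now returns a GPU array rather than a NumPy array. A small sketch of what this looks like (assuming the pygpu backend; the exact repr may vary):
>>> x = numpy.ones((2, 2), dtype=theano.config.floatX)
>>> r = sq(x)
>>> type(r)            # the result stays on the GPU
<class 'pygpu.gpuarray.GpuArray'>
>>> numpy.asarray(r)   # copy back to the host explicitly when needed
array([[ 1.,  1.],
       [ 1.,  1.]], dtype=float32)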
The default context for the CPU is cpu:
>>> b = a.transfer('cpu')
>>> theano.printing.debugprint(b)
<TensorType(float32, matrix)> [id A]
A hybrid concept between numerical values and symbolic variables is the shared variable. Shared variables can also lead to better performance on the GPU by avoiding transfers. Let's initialize a shared variable with the scalar zero:
>>> from theano import shared
>>> state = shared(0)
>>> state
<TensorType(int64, scalar)>
>>> state.get_value()
array(0)
>>> state.set_value(1)
>>> state.get_value()
array(1)
Shared values are designed to be shared between functions and can be seen as an internal state; they can be used indifferently by GPU-compiled or CPU-compiled code. By default, shared variables are created on the default device (here, cuda), except for scalar integer values (as is the case in the previous example).
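As a minimal sketch of that internal state (reusing the state variable from above; inc and accumulator are illustrative names), a compiled function can update the shared value at every call through the updates argument:
>>> inc = T.iscalar('inc')
>>> accumulator = theano.function([inc], state, updates=[(state, state + inc)])
>>> accumulator(10)    # returns the previous state, then applies the update
array(1)
>>> state.get_value()
array(11)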
Coming back to device placement, it is possible to specify another context, such as cpu. In the case of multiple GPU instances, you define your contexts on the command line that launches Python, and then decide on which context to create each shared variable:
PATH=/usr/local/cuda-8.0-cudnn-5.1/bin:$PATH THEANO_FLAGS="contexts=dev0->cuda0;dev1->cuda1,floatX=float32,gpuarray.preallocate=0.8" python
>>> import theano
Using cuDNN version 5110 on context dev0
Preallocating 9151/11439 Mb (0.800000) on cuda0
Mapped name dev0 to device cuda0: Tesla K80 (0000:83:00.0)
Using cuDNN version 5110 on context dev1
Preallocating 9151/11439 Mb (0.800000) on cuda1
Mapped name dev1 to device cuda1: Tesla K80 (0000:84:00.0)
>>> import theano.tensor as T
>>> import numpy
>>> theano.shared(numpy.random.random((1024, 1024)).astype('float32'), target='dev1')
<GpuArrayType<dev1>(float32, (False, False))>
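Building on this two-context setup, a minimal sketch (the w0 and w1 names are illustrative) places one shared matrix on each device and compiles a single function that runs one product per GPU; by default, each result is copied back to the host:
>>> w0 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'), target='dev0')
>>> w1 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'), target='dev1')
>>> f = theano.function([], [T.dot(w0, w0), T.dot(w1, w1)])
>>> r0, r1 = f()    # one dot product executed on each GPU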