
How to do it...

  1. At the moment, Gluon is included in the latest release of MXNet (follow the steps in Building efficient models with MXNet to install MXNet). 
  2. After installing MXNet, we can import Gluon directly as follows:
from mxnet import gluon
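If you want to confirm that your build ships Gluon, printing the MXNet version is a quick check (a minimal sketch; the gluon module is bundled with MXNet itself):
import mxnet as mx
print(mx.__version__)  # Gluon is bundled with MXNet releases from 0.11 onwards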
  3. Next, we create some dummy data. For this, the data needs to be in MXNet's NDArray or Symbol format:
import mxnet as mx
import numpy as np
x_input = mx.nd.empty((1, 5), mx.gpu())
x_input[:] = np.array([[1,2,3,4,5]], np.float32)

y_input = mx.nd.empty((1, 5), mx.gpu())
y_input[:] = np.array([[10, 15, 20, 22.5, 25]], np.float32)
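These steps assume a GPU is available. On a CPU-only machine the same recipe works if you pick the context once and reuse it wherever mx.gpu() appears; a minimal fallback sketch:
# Pick a context: try the GPU first, fall back to the CPU if none is usable.
try:
    mx.nd.zeros((1,), ctx=mx.gpu())  # fails if no usable GPU/CUDA build
    ctx = mx.gpu()
except mx.MXNetError:
    ctx = mx.cpu()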
  4. With Gluon, it's really straightforward to build a neural network by stacking layers:
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(16, activation="relu"))
    net.add(gluon.nn.Dense(len(y_input)))
  5. Next, we initialize the parameters and store them on our GPU as follows:
net.collect_params().initialize(mx.init.Normal(), ctx=mx.gpu())
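Gluon defers shape inference until the network has seen data, so the parameter shapes are only fixed after the first forward pass. A quick way to inspect them once initialization has run (a minimal sketch):
net(x_input)  # first forward pass triggers shape inference
for name, param in net.collect_params().items():
    print(name, param.shape)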
  6. With the following code, we set the loss function and the optimizer:
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': .1})
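Note that the dummy targets in this recipe are continuous values rather than class indices. If you want a pure regression setup instead, Gluon's built-in squared-error loss is a drop-in alternative (shown here as an option, not as part of the original recipe):
l2_loss = gluon.loss.L2Loss()  # squared-error loss for continuous targets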
  7. We're ready to start training our model:
n_epochs = 10

for e in range(n_epochs):
    for i in range(len(x_input)):
        input = x_input[i:i + 1]   # slicing keeps the batch dimension intact
        target = y_input[i:i + 1]
        with mx.autograd.record():
            output = net(input)
            loss = softmax_cross_entropy(output, target)
        loss.backward()
        trainer.step(input.shape[0])
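After training, a quick sanity check is to push the inputs through the network once more and compare the outputs against the targets (a minimal sketch):
print(net(x_input))  # network outputs after training
print(y_input)       # targets for comparison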
We've briefly demonstrated how to implement a neural network architecture with Gluon. Gluon is a powerful extension that can be used to implement deep learning architectures with clean code, and there is almost no performance penalty when using it.
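The performance claim is worth one concrete pointer: networks built with Gluon's HybridSequential container can be compiled into a static graph with a single hybridize() call, recovering most of the speed of symbolic MXNet. A minimal sketch of the same network in hybrid form, with the layer sizes taken from the recipe above:
net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Dense(16, activation="relu"))
    net.add(gluon.nn.Dense(len(y_input)))
net.hybridize()  # compile the imperative graph into a static one
net.collect_params().initialize(mx.init.Normal(), ctx=mx.gpu())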
