
How to do it...

  1. At the moment, Gluon is included in the latest release of MXNet (follow the steps in Building efficient models with MXNet to install MXNet). 
  2. After installing, we can import Gluon directly as follows:
from mxnet import gluon
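A quick sanity check that the installed MXNet build bundles Gluon is to print the version:

import mxnet as mx

# Print the installed MXNet version
print(mx.__version__)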
  3. Next, we create some dummy data. For this, the data needs to be an MXNet NDArray or Symbol:
import mxnet as mx
import numpy as np

# Create a 1x5 input vector on the GPU
x_input = mx.nd.empty((1, 5), mx.gpu())
x_input[:] = np.array([[1, 2, 3, 4, 5]], np.float32)

# Create the matching 1x5 target vector on the GPU
y_input = mx.nd.empty((1, 5), mx.gpu())
y_input[:] = np.array([[10, 15, 20, 22.5, 25]], np.float32)
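If no GPU is available, the same arrays can be created on the CPU by passing mx.cpu() as the context. A minimal sketch of a context fallback (the ctx variable is our own addition, not part of the recipe):

try:
    # Force execution with asnumpy() so a missing GPU surfaces as an error
    mx.nd.zeros((1,), ctx=mx.gpu()).asnumpy()
    ctx = mx.gpu()
except mx.base.MXNetError:
    ctx = mx.cpu()

x_input = mx.nd.empty((1, 5), ctx)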
  4. With Gluon, it's really straightforward to build a neural network by stacking layers:
net = gluon.nn.Sequential()
with net.name_scope():
    # Hidden layer with 16 units and ReLU activation
    net.add(gluon.nn.Dense(16, activation="relu"))
    # Output layer with one unit per target value
    net.add(gluon.nn.Dense(y_input.shape[1]))
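To double-check the stacked layers, you can simply print the network. Note that the layer shapes are not fully known yet, because Gluon infers the input dimensions lazily:

print(net)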
  5. Next, we initialize the parameters and store them on our GPU as follows:
net.collect_params().initialize(mx.init.Normal(), ctx=mx.gpu())
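Gluon defers the actual weight allocation until it sees data, so the parameter shapes are only materialized after the first forward pass. A quick way to confirm this:

# The first forward pass triggers shape inference and allocates the weights on the GPU
net(x_input)
print(net.collect_params())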
  6. With the following code, we set the loss function and the optimizer:
# Classification loss and an Adam optimizer with a learning rate of 0.1
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': .1})
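As an aside, softmax cross-entropy is a classification loss; for real-valued targets like our dummy data, a regression loss such as gluon.loss.L2Loss would arguably be a more natural fit. A minimal sketch if you want to swap it in (our own suggestion, not part of the recipe):

l2_loss = gluon.loss.L2Loss()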
  7. We're ready to start training our model:
n_epochs = 10

for e in range(n_epochs):
    for i in range(len(x_input)):
        # Slice with i:i+1 so the batch dimension is preserved
        input = x_input[i:i+1]
        target = y_input[i:i+1]
        # Record the forward pass so that gradients can be computed
        with mx.autograd.record():
            output = net(input)
            loss = softmax_cross_entropy(output, target)
        loss.backward()
        # Update the weights, scaling the gradient by the batch size
        trainer.step(input.shape[0])
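To sanity-check the result, we can run the trained network on the input once more. This is a quick inspection on the training data itself, not a proper evaluation:

predictions = net(x_input)
print(predictions)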
We've briefly demonstrated how to implement a neural network architecture with Gluon. Gluon is a powerful extension of MXNet that can be used to implement deep learning architectures with clean code, and it comes with almost no performance loss.
