
Creating a TensorBoard callback

I've started us off in this chapter by copying our networks and data from Chapter 2, Using Deep Learning to Solve Regression Problems. We're going to make a few simple additions to add our TensorBoard callback. Let's start by modifying the mlp we built first. 

First, we need to import the TensorBoard callback class, using the following code:

from keras.callbacks import TensorBoard

Then we will instantiate the callback. I like to do this inside a function that creates all my callbacks, to keep things carefully crafted and tidy. The create_callbacks() function below will return a list of all the callbacks we will pass to .fit(). In this case, it returns a list with one element:

def create_callbacks():
    tensorboard_callback = TensorBoard(log_dir='~/ch3_tb_log/mlp',
                                       histogram_freq=1, batch_size=32,
                                       write_graph=True, write_grads=False)
    return [tensorboard_callback]

Before we move on, let's cover some of the arguments we're using here:

  • log_dir: This is the path where we will write the log files for TensorBoard.
You might have noticed that I'm writing the logs for the MLP network's TensorBoard callback to ~/ch3_tb_log/mlp, which creates a new directory, mlp, under the base directory we specified for TensorBoard. This is intentional. We will configure the deep neural network model we trained in Chapter 2, Using Deep Learning to Solve Regression Problems, to log to a separate directory, ~/ch3_tb_log/dnn. Doing so will allow us to compare both model runs against each other.
  • histogram_freq: This specifies how often we will compute histograms for activations and weights (in epochs). It defaults to 0, which makes the log much smaller but doesn't generate histograms. We will cover why and when you'll be interested in histograms shortly.
  • batch_size: This is the batch size used to calculate histograms. It defaults to 32.
  • write_graph: This argument is a Boolean. It tells TensorBoard to visualize the network graph. This can be quite handy, but it can also make the logs quite large.
  • write_grads: This argument is also a Boolean. It tells TensorBoard to compute histograms of the gradients as well. Because TensorFlow automatically calculates gradients for you, this is rarely used. However, if you were to use custom activations or costs, it could be an excellent troubleshooting tool, as shown in the sketch after this list.
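For example, if you were troubleshooting a custom activation, you might enable gradient histograms. The following is just a sketch that reuses the arguments from above with write_grads turned on; the mlp_debug log directory is a hypothetical name:

from keras.callbacks import TensorBoard

def create_debug_callbacks():
    # Sketch: same callback as before, but with gradient histograms
    # enabled for troubleshooting. Note that write_grads requires
    # histogram_freq > 0, and it makes the logs noticeably larger.
    tensorboard_callback = TensorBoard(log_dir='~/ch3_tb_log/mlp_debug',
                                       histogram_freq=1, batch_size=32,
                                       write_graph=True, write_grads=True)
    return [tensorboard_callback]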

The TensorBoard callback can take additional arguments that are useful for neural networks operating on images or using embedding layers. We will cover both later in the book. If you're interested in these features now, please see the TensorBoard API doc at https://keras.io/callbacks/#tensorboard.
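For reference, here is a sketch of what those extra arguments look like in the Keras 2 API linked above. The write_images and embeddings_* arguments are the image- and embedding-related options just mentioned; the layer name embedding_1 is a hypothetical placeholder for one of your own layers:

from keras.callbacks import TensorBoard

# Sketch only: these arguments follow the Keras 2 TensorBoard signature.
tensorboard_callback = TensorBoard(log_dir='~/ch3_tb_log/mlp',
                                   histogram_freq=1, batch_size=32,
                                   write_graph=True,
                                   write_images=True,  # log layer weights as images
                                   embeddings_freq=1,  # log embeddings every epoch
                                   embeddings_layer_names=['embedding_1'])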

Now we just need to create our list of callbacks and fit our mlp with the callbacks argument. That will look like this:

callbacks = create_callbacks()

model.fit(x=data["train_X"], y=data["train_y"], batch_size=32,
          epochs=200, verbose=1,
          validation_data=(data["val_X"], data["val_y"]),
          callbacks=callbacks)

The callbacks argument at the end of the call is the only new addition.

Before we move on to using TensorBoard, I will instrument the deep neural network the same way I instrumented the mlp. The only change in the code is the directory we write the TensorBoard logs to:

def create_callbacks():
    tensorboard_callback = TensorBoard(log_dir='~/ch3_tb_log/dnn',
                                       histogram_freq=1, batch_size=32,
                                       write_graph=True, write_grads=False)
    return [tensorboard_callback]

The rest of the code will be the same. Now, let's train each network again and take a look at TensorBoard.
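As a quick preview, you can point TensorBoard at the parent log directory so that both runs appear side by side. Assuming TensorBoard is installed and on your path, the invocation would look like this:

tensorboard --logdir ~/ch3_tb_log

TensorBoard will then serve its UI locally, by default on port 6006.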
