
Summary

In this chapter, you took your first steps toward solving NLP tasks by getting to know the primary underlying platform (TensorFlow) on which we will implement our algorithms. First, we discussed the underlying details of the TensorFlow architecture. Next, we discussed the essential ingredients of a meaningful TensorFlow client. Then we discussed a general coding practice widely used in TensorFlow, known as scoping. Later, we brought all these elements together to implement a neural network that classifies the MNIST dataset.

Specifically, we discussed the TensorFlow architecture, aligning the explanation with an example TensorFlow client. In the TensorFlow client, we defined the TensorFlow graph. Then, when we created a session, it looked at the graph, created a GraphDef object representing the graph, and sent it to the distributed master. The distributed master looked at the graph, decided which components to use for the relevant computation, and divided it into several subgraphs to make the computations faster. Finally, workers executed the subgraphs and returned the results through the session.
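This define-the-graph-then-run workflow can be sketched minimally as follows (assuming TensorFlow 2.x with the `tf.compat.v1` API, or TensorFlow 1.x; the tensor names are illustrative):

```python
import tensorflow as tf

# TF1-style graph/session execution
tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    a = tf.constant(5.0, name='a')
    b = tf.constant(3.0, name='b')
    c = tf.add(a, b, name='c')  # adds a node to the graph; nothing is computed yet

# Creating the session hands the graph (serialized as a GraphDef) to the runtime;
# sess.run triggers the actual execution and returns the result to the client
with tf.compat.v1.Session(graph=graph) as sess:
    result = sess.run(c)
    print(result)  # 8.0
```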

Next, we discussed the various elements that compose a typical TensorFlow client: inputs, variables, outputs, and operations. Inputs are the data we feed to the algorithm for training and testing purposes. We discussed three different ways of feeding inputs: using placeholders, preloading the data and storing it as TensorFlow tensors, and using an input pipeline. Then we discussed TensorFlow variables, how they differ from other tensors, and how to create and initialize them. Following this, we discussed how variables can be used to create intermediate and terminal outputs. Finally, we discussed several of the available TensorFlow operations, such as mathematical operations, matrix operations, neural-network-related operations, and control-flow operations, that will be used later in the book.
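The four ingredients can be seen together in a short sketch (TF1-style API via `tf.compat.v1`; the shapes and names here are illustrative, not the chapter's exact values):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Input: a placeholder whose value is fed at run time
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name='x')
    # Variable: mutable state that must be initialized before use
    W = tf.compat.v1.get_variable(
        'W', shape=[3, 2], initializer=tf.compat.v1.ones_initializer())
    # Operation producing the terminal output
    y = tf.matmul(x, W)

with tf.compat.v1.Session(graph=graph) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    output = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})
    print(output)  # [[6. 6.]]
```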

Then we discussed how scoping can be used to avoid certain pitfalls when implementing a TensorFlow client. Scoping allows variables to be used with ease, while maintaining the encapsulation of the code.
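As a minimal sketch of what scoping buys us (TF1-style API; the scope and variable names are illustrative), `tf.compat.v1.get_variable` inside a `variable_scope` lets two pieces of code share the same underlying variable instead of silently creating duplicates:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    x = tf.constant(2.0)
    with tf.compat.v1.variable_scope('layer1'):
        w = tf.compat.v1.get_variable(
            'w', shape=[], initializer=tf.compat.v1.ones_initializer())
        out1 = x * w
    # reuse=True retrieves the existing 'layer1/w' rather than creating a new one
    with tf.compat.v1.variable_scope('layer1', reuse=True):
        w_again = tf.compat.v1.get_variable('w', shape=[])
        out2 = x * w_again

print(w.name, w_again.name)  # both refer to 'layer1/w:0'
```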

Finally, we implemented a neural network using all the previously learned concepts. We used a three-layer neural network to classify the MNIST digit dataset.
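A shape-level sketch of such a three-layer fully connected classifier for MNIST-style inputs (784 features, 10 classes) is shown below. Note that the hidden-layer sizes (500, 250) and all names are illustrative assumptions, not necessarily those used in the chapter, and the sketch omits the loss and training loop:

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    inputs = tf.compat.v1.placeholder(tf.float32, [None, 784], name='inputs')
    h = inputs
    sizes = [(784, 500), (500, 250), (250, 10)]
    for i, (n_in, n_out) in enumerate(sizes):
        # One variable scope per layer keeps variable names organized
        with tf.compat.v1.variable_scope('layer%d' % (i + 1)):
            w = tf.compat.v1.get_variable(
                'w', [n_in, n_out],
                initializer=tf.compat.v1.glorot_uniform_initializer())
            b = tf.compat.v1.get_variable(
                'b', [n_out], initializer=tf.compat.v1.zeros_initializer())
            h = tf.matmul(h, w) + b
            if i < len(sizes) - 1:
                h = tf.nn.relu(h)  # hidden layers use ReLU; last layer emits logits
    predictions = tf.argmax(h, axis=1, name='predictions')

with tf.compat.v1.Session(graph=graph) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    batch = np.random.rand(4, 784).astype(np.float32)  # stand-in for MNIST images
    preds = sess.run(predictions, feed_dict={inputs: batch})
    print(preds.shape)  # (4,)
```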

In the next chapter, we will see how to use the fully connected neural network we implemented in this chapter to learn semantic numerical representations of words.
