
Getting ready

To understand the intuition of performing text analysis, let's consider the Reuters dataset, where each news article is classified into one of the 46 possible topics.

We will adopt the following strategy to perform our analysis:

  • Given that a dataset could contain thousands of unique words, we shall shortlist the words to consider.
  • For this specific exercise, we shall consider the top 10,000 most frequent words.
  • An alternative approach would be to consider the words that cumulatively constitute 80% of all word occurrences in the dataset. This ensures that rare words are excluded.
  • Once the words are shortlisted, we shall one-hot-encode each article based on which of the frequent words it contains.
  • Similarly, we shall one-hot-encode the output label.
  • Each input now is a 10,000-dimensional vector, and the output is a 46-dimensional vector.
  • We will divide the dataset into train and test datasets. In code, we will use the built-in reuters dataset in Keras, which provides a function to retain only the top n most frequent words and to split the data into train and test datasets.
  • We will map the input to the output through a hidden layer in between.
  • We will perform softmax at the output layer to obtain the probability of the input belonging to one of the 46 classes.
  • Given that each input belongs to one of multiple possible classes, we shall employ the categorical cross-entropy loss function.
  • We shall compile and fit the model, and measure its accuracy on the test dataset.
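The encoding and output steps above can be sketched with plain NumPy. This is a minimal illustration with toy sizes (10 words and 4 classes standing in for 10,000 and 46), not the actual recipe code: in Keras, the real equivalents are reuters.load_data(num_words=10000) for the shortlisting and split, and to_categorical for the label encoding. The articles, labels, and the random weight matrix below are made-up stand-ins.

```python
import numpy as np

# Toy stand-ins for the Reuters data: each article is a list of word
# indices, each label an integer topic id (assumption: made-up values).
num_words, num_classes = 10, 4   # stand-ins for 10,000 words and 46 topics
articles = [[1, 3, 3, 7], [2, 5, 9]]
labels = [0, 2]

def vectorize(sequences, dim):
    """One-hot-encode each article over the word shortlist."""
    out = np.zeros((len(sequences), dim))
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0            # mark every shortlisted word present
    return out

def one_hot(labels, dim):
    """One-hot-encode the topic labels (Keras equivalent: to_categorical)."""
    out = np.zeros((len(labels), dim))
    out[np.arange(len(labels)), labels] = 1.0
    return out

x = vectorize(articles, num_words)   # shape (2, 10); real data: (n, 10000)
y = one_hot(labels, num_classes)     # shape (2, 4);  real data: (n, 46)

def softmax(z):
    """Softmax at the output layer: a probability per class, summing to 1."""
    e = np.exp(z - z.max(axis=1, keepdims=True))   # shift for stability
    return e / e.sum(axis=1, keepdims=True)

def categorical_cross_entropy(y_true, y_prob):
    """Loss for inputs that each belong to exactly one of many classes."""
    return -np.mean(np.sum(y_true * np.log(y_prob + 1e-12), axis=1))

# An untrained random linear layer standing in for the network's output
# layer, just to show the shapes flowing through softmax and the loss.
rng = np.random.default_rng(0)
logits = x @ rng.normal(scale=0.1, size=(num_words, num_classes))
probs = softmax(logits)
loss = categorical_cross_entropy(y, probs)
```

Each row of probs sums to 1, so the largest entry can be read directly as the predicted topic; training would adjust the weights to drive the loss down.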