
Getting ready

To build an intuition for performing text analysis, let's consider the Reuters dataset, in which each news article is classified into one of 46 possible topics.

We will adopt the following strategy to perform our analysis:

  • Given that a dataset could contain thousands of unique words, we will shortlist the words that we shall consider.
  • For this specific exercise, we shall consider the top 10,000 most frequent words.
  • An alternative approach would be to consider the words that cumulatively constitute 80% of all words within the dataset; this ensures that rare words are excluded.
  • Once the words are shortlisted, we shall one-hot-encode each article based on the frequent words it contains.
  • Similarly, we shall one-hot-encode the output label.
  • Each input is now a 10,000-dimensional vector, and each output is a 46-dimensional vector.
  • We will divide the dataset into train and test datasets. In code, we will use the built-in reuters dataset in Keras, which provides a function that restricts the data to the top n most frequent words and splits it into train and test datasets.
  • We will map the input to the output with a hidden layer in between.
  • We will apply softmax at the output layer to obtain the probability of the input belonging to each of the 46 classes.
  • Given that there are multiple possible output classes, we shall employ the categorical cross-entropy loss function.
  • We shall compile and fit the model, and measure its accuracy on the test dataset.
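The strategy above can be sketched end to end with Keras' built-in Reuters loader. Note that the hidden-layer size (64 units), the optimizer (adam), and the number of epochs are illustrative assumptions, not prescriptions from the text:

```python
import numpy as np
from tensorflow.keras.datasets import reuters
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

num_words = 10000  # shortlist the top 10,000 most frequent words

# The built-in loader keeps only the top num_words words and
# splits the articles into train and test datasets for us
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=num_words)

def one_hot(sequences, dim=num_words):
    # Encode each article as a binary vector marking which of the
    # shortlisted words it contains
    out = np.zeros((len(sequences), dim))
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0
    return out

x_train = one_hot(x_train)            # inputs: 10,000-dimensional vectors
x_test = one_hot(x_test)
y_train = to_categorical(y_train, 46) # labels: 46-dimensional one-hot vectors
y_test = to_categorical(y_test, 46)

model = Sequential([
    Dense(64, activation='relu', input_shape=(num_words,)),  # hidden layer (size is an assumption)
    Dense(46, activation='softmax'),  # probabilities over the 46 topics
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_split=0.1)

loss, acc = model.evaluate(x_test, y_test)
print('test accuracy:', acc)
```

Because each article is reduced to a fixed-length bag-of-words vector, word order is discarded; the recipe trades that information away for a simple dense architecture.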