
Getting ready

To understand the intuition of performing text analysis, let's consider the Reuters dataset, where each news article is classified into one of the 46 possible topics.

We will adopt the following strategy to perform our analysis:

  • Given that a dataset could contain thousands of unique words, we will shortlist the words that we shall consider.
  • For this specific exercise, we shall consider the top 10,000 most frequent words.
  • An alternative approach would be to consider the words that cumulatively constitute 80% of all words within a dataset. This ensures that rare words are excluded.
  • Once the words are shortlisted, we shall one-hot-encode the article based on the constituent frequent words.
  • Similarly, we shall one-hot-encode the output label.
  • Each input is now a 10,000-dimensional vector, and the output is a 46-dimensional vector.
  • We will divide the dataset into train and test datasets. In code, you will notice that we use the built-in reuters dataset in Keras, which has a built-in function to identify the top n most frequent words and to split the dataset into train and test datasets.
  • Map the input to the output through a hidden layer in between.
  • We will perform softmax at the output layer to obtain the probability of the input belonging to one of the 46 classes.
  • Given that we have multiple possible output classes, we shall employ a categorical cross-entropy loss function.
  • We shall compile and fit the model to measure its accuracy on a test dataset.
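The shortlisting and encoding steps above can be sketched in plain NumPy. This is a minimal illustration, not the full recipe: the helper names `vectorize_sequences` and `one_hot_labels` are made up for this sketch (for the labels, Keras itself provides `keras.utils.to_categorical`), and the toy word indices stand in for the indices returned by `keras.datasets.reuters.load_data(num_words=10000)`:

```python
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    """Encode each article as a binary vector: position i is 1 if
    word index i occurs in the article, 0 otherwise."""
    results = np.zeros((len(sequences), dimension))
    for row, sequence in enumerate(sequences):
        results[row, sequence] = 1.0  # duplicate indices still map to 1
    return results

def one_hot_labels(labels, num_classes=46):
    """One-hot encode each topic label into a num_classes-dimensional vector."""
    results = np.zeros((len(labels), num_classes))
    results[np.arange(len(labels)), labels] = 1.0
    return results

# Two toy "articles", each a list of word indices, with a 5-word vocabulary.
x = vectorize_sequences([[0, 2, 2], [1, 3]], dimension=5)
y = one_hot_labels([0, 3], num_classes=4)
```

With the real dataset, `dimension=10000` and `num_classes=46` reproduce the input and output shapes described above; the encoded arrays can then be passed directly to a model ending in a 46-unit softmax layer compiled with the categorical cross-entropy loss.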