- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
Getting ready
To understand the intuition of performing text analysis, let's consider the Reuters dataset, where each news article is classified into one of the 46 possible topics.
We will adopt the following strategy to perform our analysis:
- Given that a dataset could contain thousands of unique words, we will shortlist the words that we shall consider.
- For this specific exercise, we shall consider the top 10,000 most frequent words.
- An alternative approach would be to consider the words that cumulatively constitute 80% of all word occurrences in the dataset. This ensures that the rare words are excluded.
- Once the words are shortlisted, we shall one-hot-encode the article based on the constituent frequent words.
- Similarly, we shall one-hot-encode the output label.
- Each input now is a 10,000-dimensional vector, and the output is a 46-dimensional vector.
- We will divide the dataset into train and test datasets. In the code, however, you will notice that we use the built-in reuters dataset in Keras, which provides a built-in way to keep only the top n most frequent words and to split the data into train and test datasets (sketched in the first code block after this list).
- We will map the input to the output with a hidden layer in between.
- We will perform softmax at the output layer to obtain the probability of the input belonging to one of the 46 classes.
- Given that we have multiple possible output classes, we shall employ a categorical cross-entropy loss function.
- We shall compile and fit the model and measure its accuracy on the test dataset, as shown in the second sketch after this list.
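The following is a minimal sketch of the data-preparation steps above, assuming the tensorflow.keras API; the `vectorize_sequences` helper is an illustrative name introduced here, not part of the library.

```python
import numpy as np
from tensorflow.keras.datasets import reuters
from tensorflow.keras.utils import to_categorical

# Load the built-in Reuters dataset, keeping only the top 10,000 most frequent words
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=10000)

def vectorize_sequences(sequences, dimension=10000):
    # One-hot-encode each article: place a 1 at the index of every word it contains
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

X_train = vectorize_sequences(x_train)             # 10,000-dimensional input vectors
X_test = vectorize_sequences(x_test)
Y_train = to_categorical(y_train, num_classes=46)  # 46-dimensional one-hot output labels
Y_test = to_categorical(y_test, num_classes=46)
```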
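And a sketch of the model itself, again assuming tensorflow.keras; the 64-unit hidden layer, optimizer, batch size, and number of epochs are illustrative assumptions rather than the exact values used in the recipe.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Input (10,000-dim) -> hidden layer -> softmax over the 46 topics
model = Sequential([
    Dense(64, activation='relu', input_shape=(10000,)),  # hidden-layer size is an assumption
    Dense(46, activation='softmax')
])

# Multiple mutually exclusive classes, hence categorical cross-entropy
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Fit on the train split and track accuracy on the test split
model.fit(X_train, Y_train,
          epochs=5, batch_size=128,
          validation_data=(X_test, Y_test))

test_loss, test_acc = model.evaluate(X_test, Y_test)
```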