
Natural language processing using a hashing vectorizer and tf-idf with scikit-learn

We often find in data science that the objects we wish to analyze are textual. For example, they might be tweets, articles, or network logs. Since our algorithms require numerical inputs, we must find a way to convert such text into numerical features. To this end, we utilize a sequence of techniques.

A token is a unit of text; for example, we may specify that our tokens are words, sentences, or characters. A count vectorizer takes textual input and outputs a vector of the counts of its tokens. A hashing vectorizer is a variant of the count vectorizer that aims to be faster and more scalable, at the cost of interpretability and the possibility of hashing collisions.

Though raw counts can be useful, they can also mislead: unimportant words, such as "the" and "a" (known as stop words), occur with high frequency and hence carry little informative content. For this reason, we often assign words different weights to offset this. The main technique for doing so is tf-idf, which stands for term frequency-inverse document frequency. The main idea is that we account for the number of times a term occurs, but discount it by the number of documents it occurs in.
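Since this recipe uses scikit-learn, a minimal sketch of the two-step transformation might look as follows. The toy corpus and the n_features value are illustrative assumptions, not taken from the recipe itself:

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline

# A small, hypothetical corpus of log-like documents (illustrative only).
corpus = [
    "failed login from 10.0.0.1",
    "failed login from 10.0.0.2",
    "successful login from 10.0.0.1",
]

# HashingVectorizer maps tokens to a fixed number of columns via a hash
# function, so no vocabulary is stored; alternate_sign=False keeps the
# hashed features non-negative, like ordinary counts.
# TfidfTransformer then reweights those counts by inverse document frequency.
pipeline = Pipeline([
    ("hash", HashingVectorizer(n_features=2**10, alternate_sign=False)),
    ("tfidf", TfidfTransformer()),
])

X = pipeline.fit_transform(corpus)
print(X.shape)  # (3, 1024): one sparse tf-idf row per document
```

Keeping the hashed features non-negative lets the tf-idf weighting behave as it would on ordinary counts; the trade-off, as noted above, is that hashed columns cannot be mapped back to the original tokens.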

In cybersecurity, text data is omnipresent; event logs, conversational transcripts, and lists of function names are just a few examples. Consequently, it is essential to be able to work with such data, something you'll learn in this recipe.
