
Natural language processing using a hashing vectorizer and tf-idf with scikit-learn

We often find in data science that the objects we wish to analyze are textual. For example, they might be tweets, articles, or network logs. Since our algorithms require numerical inputs, we must find a way to convert such text into numerical features. To this end, we utilize a sequence of techniques.

A token is a unit of text; for example, we may specify that our tokens are words, sentences, or characters. A count vectorizer takes textual input and outputs a vector of token counts. A hashing vectorizer is a variation on the count vectorizer that aims to be faster and more scalable, at the cost of interpretability and the possibility of hashing collisions. Although useful, raw counts of the words appearing in a document corpus can be misleading, because unimportant words, such as "the" and "a" (known as stop words), occur with high frequency and therefore carry little informative content. For this reason, we often assign words different weights to offset this effect. The main technique for doing so is tf-idf, which stands for term frequency-inverse document frequency. The core idea is to account for the number of times a term occurs, but to discount that count by the number of documents in which the term appears.
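As a minimal sketch of how these pieces fit together (assuming scikit-learn is installed, and using a small hypothetical corpus for illustration), a hashing vectorizer can be pipelined with a tf-idf transformer to turn raw text into weighted numerical feature vectors:

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.pipeline import make_pipeline

# Hypothetical corpus of short, log-like text documents
corpus = [
    "user admin failed login from 10.0.0.1",
    "user guest successful login from 10.0.0.2",
    "user admin failed login from 10.0.0.1",
]

# The hashing vectorizer maps token counts into a fixed number of buckets
# (here 2**10), trading interpretability for speed and scalability.
# alternate_sign=False keeps the counts non-negative so that the tf-idf
# reweighting behaves like it would on ordinary count vectors.
pipeline = make_pipeline(
    HashingVectorizer(n_features=2**10, alternate_sign=False),
    TfidfTransformer(),
)

X = pipeline.fit_transform(corpus)
print(X.shape)  # sparse tf-idf feature matrix, e.g. (3, 1024)
```

The resulting sparse matrix can then be fed directly to a scikit-learn classifier or clustering algorithm, just as any other numerical feature matrix would be.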

In cybersecurity, text data is omnipresent; event logs, conversational transcripts, and lists of function names are just a few examples. Consequently, it is essential to be able to work with such data, something you'll learn in this recipe.
