Discovering word collocations
Collocations are two or more words that tend to appear frequently together, such as United States. Of course, there are many other words that can come after United, such as United Kingdom and United Airlines. As with many aspects of natural language processing, context is very important. And for collocations, context is everything!
In the case of collocations, the context will be a document in the form of a list of words. Discovering collocations in this list of words means that we'll find common phrases that occur frequently throughout the text. For fun, we'll start with the script for Monty Python and the Holy Grail.
Getting ready
The script for Monty Python and the Holy Grail is found in the webtext corpus, so be sure that it's unzipped at nltk_data/corpora/webtext/.
How to do it...
We're going to create a list of all lowercased words in the text, and then produce a BigramCollocationFinder, which we can use to find bigrams (pairs of words). These bigrams are found using association measurement functions in the nltk.metrics package, as follows:
>>> from nltk.corpus import webtext
>>> from nltk.collocations import BigramCollocationFinder
>>> from nltk.metrics import BigramAssocMeasures
>>> words = [w.lower() for w in webtext.words('grail.txt')]
>>> bcf = BigramCollocationFinder.from_words(words)
>>> bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
[("'", 's'), ('arthur', ':'), ('#', '1'), ("'", 't')]
Well, that's not very useful! Let's refine it a bit by adding a word filter to remove punctuation and stopwords:
>>> from nltk.corpus import stopwords
>>> stopset = set(stopwords.words('english'))
>>> filter_stops = lambda w: len(w) < 3 or w in stopset
>>> bcf.apply_word_filter(filter_stops)
>>> bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
[('black', 'knight'), ('clop', 'clop'), ('head', 'knight'), ('mumble', 'mumble')]
Much better! We can clearly see four of the most common bigrams in Monty Python and the Holy Grail. If you'd like to see more than four, simply increase the number to whatever you want, and the collocation finder will do its best.
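For example, asking for the top ten looks like this (output omitted here):

>>> bcf.nbest(BigramAssocMeasures.likelihood_ratio, 10)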
How it works...
BigramCollocationFinder constructs two frequency distributions: one for each word, and another for bigrams. A frequency distribution, or FreqDist in NLTK, is basically an enhanced Python dictionary where the keys are what's being counted, and the values are the counts. Any filtering functions that are applied reduce the size of these two FreqDists by eliminating any words that don't pass the filter. By using a filtering function to eliminate all words that are one or two characters, and all English stopwords, we can get a much cleaner result. After filtering, the collocation finder is ready to accept a generic scoring function for finding collocations.
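If you want to look at these distributions directly, the finder exposes them as the word_fd and ngram_fd attributes, and score_ngram() will score a single bigram. Here's a minimal sketch, continuing the session above (outputs omitted, as the exact counts and scores depend on the corpus):

>>> bcf.word_fd['knight']  # count of the word 'knight' in the word FreqDist
>>> bcf.ngram_fd[('black', 'knight')]  # count of this bigram in the bigram FreqDist
>>> bcf.score_ngram(BigramAssocMeasures.likelihood_ratio, 'black', 'knight')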
There's more...
In addition to BigramCollocationFinder, there's also TrigramCollocationFinder, which finds triplets instead of pairs. This time, we'll look for trigrams in Australian singles advertisements with the help of the following code:
>>> from nltk.collocations import TrigramCollocationFinder
>>> from nltk.metrics import TrigramAssocMeasures
>>> words = [w.lower() for w in webtext.words('singles.txt')]
>>> tcf = TrigramCollocationFinder.from_words(words)
>>> tcf.apply_word_filter(filter_stops)
>>> tcf.apply_freq_filter(3)
>>> tcf.nbest(TrigramAssocMeasures.likelihood_ratio, 4)
[('long', 'term', 'relationship')]
Now, we don't know whether people are looking for a long-term relationship or not, but clearly it's an important topic. In addition to the stopword filter, I also applied a frequency filter, which removed any trigrams that occurred fewer than three times. This is why only one result was returned when we asked for four: there was only one trigram that occurred at least three times.
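Note that apply_freq_filter() permanently removes ngrams from the finder's frequency distribution, so to try a lower threshold you have to rebuild the finder. Here's a minimal sketch, continuing the session above (the trigrams returned with the relaxed threshold will vary):

>>> tcf = TrigramCollocationFinder.from_words(words)
>>> tcf.apply_word_filter(filter_stops)
>>> tcf.apply_freq_filter(2)  # keep only trigrams that occur at least twice
>>> tcf.nbest(TrigramAssocMeasures.likelihood_ratio, 4)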
Scoring functions
There are many more scoring functions available besides likelihood_ratio(). But other than raw_freq(), you may need a bit of a statistics background to understand how they work. Consult the NLTK API documentation for NgramAssocMeasures in the nltk.metrics package to see all the possible scoring functions.
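Any of these measures can be dropped in wherever likelihood_ratio was used earlier. A quick sketch comparing two of them on the bigram finder from before (outputs omitted, since each measure produces its own ranking):

>>> bcf.nbest(BigramAssocMeasures.raw_freq, 4)  # rank bigrams by raw frequency
>>> bcf.nbest(BigramAssocMeasures.pmi, 4)  # rank by pointwise mutual information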
Scoring ngrams
In addition to the nbest() method, there are two other ways to get ngrams (a generic term used for describing bigrams and trigrams) from a collocation finder; both are demonstrated in the sketch after this list:
- above_score(score_fn, min_score): This can be used to get all ngrams with scores that are at least min_score. The min_score value that you choose will depend heavily on the score_fn you use.
- score_ngrams(score_fn): This will return a list with tuple pairs of (ngram, score). This can be used to inform your choice for min_score.
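Here's a minimal sketch of both methods, continuing the bigram session above. The min_score of 10 is an arbitrary value chosen for illustration; a sensible threshold depends on the scoring function you use:

>>> scored = bcf.score_ngrams(BigramAssocMeasures.likelihood_ratio)
>>> scored[:2]  # the two highest-scoring (ngram, score) pairs
>>> list(bcf.above_score(BigramAssocMeasures.likelihood_ratio, 10))  # all ngrams scoring at least 10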
See also
The nltk.metrics module will be used again in the Measuring precision and recall of a classifier and Calculating high information words recipes in Chapter 7, Text Classification.