
How to do it...

In the following steps, we will enumerate all the 4-grams of a sample file and display the most frequent ones:

  1. We begin by importing the collections library to facilitate counting and the ngrams library from nltk to ease extraction of N-grams:
import collections
from nltk import ngrams
  2. We specify which file we would like to analyze:
file_to_analyze = "python-3.7.2-amd64.exe"
  3. We define a convenience function to read in a file's bytes:
def read_file(file_path):
    """Reads in the binary sequence of a binary file."""
    with open(file_path, "rb") as binary_file:
        data = binary_file.read()
    return data
  4. We write a convenience function to take a byte sequence and obtain its N-grams:
def byte_sequence_to_Ngrams(byte_sequence, N):
    """Creates a list of N-grams from a byte sequence."""
    Ngrams = ngrams(byte_sequence, N)
    return list(Ngrams)

  5. We write a function to take a file and obtain the counts of its N-grams:
def binary_file_to_Ngram_counts(file, N):
    """Takes a binary file and outputs the N-gram counts of its byte sequence."""
    filebyte_sequence = read_file(file)
    file_Ngrams = byte_sequence_to_Ngrams(filebyte_sequence, N)
    return collections.Counter(file_Ngrams)
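As a quick sanity check of the counting logic, the same idea can be exercised on a short in-memory byte string. This sketch replaces nltk's ngrams with an equivalent zip-based sliding window, so it needs no file on disk and no nltk installation:

```python
import collections

# Minimal sketch: count the 2-grams of a short byte string.
# The zip-based sliding window produces the same tuples that
# ngrams(sample, 2) would for this input.
sample = b"abab"  # iterating over bytes yields integers: 97, 98, 97, 98
two_grams = list(zip(sample, sample[1:]))
counts = collections.Counter(two_grams)
print(counts.most_common(1))  # [((97, 98), 2)]
```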
  6. We specify that our desired value is N=4 and obtain the counts of all 4-grams in the file:
extracted_Ngrams = binary_file_to_Ngram_counts(file_to_analyze, 4)
  7. We list the 10 most common 4-grams of our file:
print(extracted_Ngrams.most_common(10))

The result is as follows:

[((0, 0, 0, 0), 24201), ((139, 240, 133, 246), 1920), ((32, 116, 111, 32), 1791), ((255, 255, 255, 255), 1663), ((108, 101, 100, 32), 1522), ((100, 32, 116, 111), 1519), ((97, 105, 108, 101), 1513), ((105, 108, 101, 100), 1513), ((70, 97, 105, 108), 1505), ((101, 100, 32, 116), 1503)]
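Notice that several of the top 4-grams are printable ASCII fragments (for example, pieces of the string "Failed to" in the executable's embedded messages). A small helper, written here as an illustrative sketch rather than part of the recipe, renders an n-gram's byte values as text:

```python
def ngram_to_text(ngram):
    """Render an n-gram of byte values as ASCII, dotting out non-printables."""
    return "".join(chr(b) if 32 <= b < 127 else "." for b in ngram)

# (70, 97, 105, 108) is one of the frequent 4-grams in the output above
print(ngram_to_text((70, 97, 105, 108)))  # Fail
print(ngram_to_text((0, 0, 0, 0)))        # ....
```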