
Calculating statistics and reducing duplicates based on file contents

At first glance, calculating statistics based on the contents of a file might not seem like one of the most interesting tasks one could accomplish with Bash scripting; however, it can be useful in several circumstances. Imagine that our program takes user input from several commands. We could calculate the length of that input to determine whether it is too short or too long. Alternatively, we could determine the size of a string to choose buffer sizes for a program written in another programming language (such as C/C++):

$ wc -c <<< "1234567890"
11 # Note there are 10 chars + a trailing newline (\n) added by the here string
$ echo -n "1234567890" | wc -c
10
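
Building on this, a length check can gate user input before a script acts on it. The following is a minimal sketch; the MAX_LEN limit and the prompt text are illustrative assumptions rather than part of the recipe:

#!/bin/bash
# Minimal sketch: reject input longer than an assumed maximum length.
MAX_LEN=32   # hypothetical limit; choose a value that suits your program

read -r -p "Enter a value: " INPUT
LEN=$(printf '%s' "${INPUT}" | wc -c)   # printf '%s' avoids counting a newline

if [ "${LEN}" -gt "${MAX_LEN}" ]; then
    echo "Input is ${LEN} bytes; the maximum is ${MAX_LEN}" >&2
    exit 1
fi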
We can use commands such as wc to count characters, words, and lines, and combine those counts with the other functionality provided by your script.
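
For example, assuming a hypothetical input file named notes.txt, the counts are available directly (the numbers shown are illustrative):

$ wc -l notes.txt        # number of lines
42 notes.txt
$ wc -w notes.txt        # number of words
314 notes.txt
$ wc -l < notes.txt      # reading from stdin omits the file name
42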

Better yet, what if we used a command called strings to extract all of the printable ASCII strings from a file? The strings program will output every occurrence of a string, even if there are duplicates. Using other programs such as sort and uniq (or a combination of the two), we can also sort the contents of a file and reduce duplicates if we want to calculate the number of unique lines within a file:

$ strings /bin/ls > unalteredoutput.txt
$ ls -lah unalteredoutput.txt
-rw-rw-r-- 1 rbrash rbrash 22K Nov 24 11:17 unalteredoutput.txt
$ strings /bin/ls | sort -u > sortedoutput.txt
$ ls -lah sortedoutput.txt
-rw-rw-r-- 1 rbrash rbrash 19K Nov 24 11:17 sortedoutput.txt
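
To actually count the strings rather than just save them to a file, the same pipelines can feed wc directly. This is a small sketch using the same /bin/ls binary as above:

$ strings /bin/ls | wc -l                 # total strings, duplicates included
$ strings /bin/ls | sort -u | wc -l       # unique strings only
$ strings /bin/ls | sort | uniq -c | sort -rn | head -5   # five most frequent strings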

Now that we have seen a few reasons why we might need to perform some basic statistics, let's carry on with the recipe.
