
Optimizing the Storing and Processing of Data for Machine Learning Problems

All of the preceding uses for artificial intelligence rely heavily on optimized data storage and processing. Optimization is necessary for machine learning because the volumes of data involved can be enormous, as the following examples show:

  • A single X-ray file can be many gigabytes in size.
  • Translation corpora (large collections of texts) can reach billions of sentences.
  • YouTube's stored data is measured in exabytes.
  • Financial data might seem like just a few numbers, but those numbers arrive in such quantities every second that the New York Stock Exchange alone generates about 1 TB of data daily.
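To put that last figure in perspective, a quick back-of-the-envelope calculation (assuming a decimal terabyte and a uniform spread across the day, which real markets do not have) shows the sustained throughput such a feed implies:

```python
# Rough estimate: what does 1 TB of data per day mean per second?
tb_per_day = 1
bytes_per_day = tb_per_day * 10**12   # decimal terabyte (10^12 bytes)
seconds_per_day = 24 * 60 * 60        # 86,400 seconds

bytes_per_second = bytes_per_day / seconds_per_day
print(f"{bytes_per_second / 10**6:.1f} MB/s sustained")  # ≈ 11.6 MB/s
```

Even averaged out, that is over 11 MB of new data every second, around the clock; bursts during trading hours are far higher, which is exactly why storage and processing must be optimized.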

While every machine learning system is unique, in many systems, data touches the same components. In a hypothetical machine learning system, data might be dealt with as follows:

Figure 1.4: Hardware used in a hypothetical machine learning system

Each of these is a highly specialized piece of hardware, and although not all of them store data for long periods in the way traditional hard disks or tape backups do, it is important to know how data storage can be optimized at each stage. Let's dive into a text classification AI project to see how optimizations can be applied at some stages.
