
The basics of web requests

The worldwide capacity to generate data is estimated to double in size every two years. Even though there is an interdisciplinary field known as data science that is entirely dedicated to the study of data, almost every programming task in software development also involves collecting and analyzing data. A significant part of this is, of course, data collection. However, the data our applications need is not always stored neatly and cleanly in a database; sometimes, we need to collect it from web pages.

For example, web scraping is a data extraction method that automatically makes requests to web pages and downloads specific information. Web scraping allows us to comb through numerous websites and collect any data we need in a systematic and consistent manner—the collected data can be analyzed later on by our applications or simply saved on our computers in various formats. An example of this would be Google, which programs and runs numerous web scrapers of its own to find and index web pages for the search engine.
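To make the idea of "requesting a page and extracting specific information" concrete, here is a minimal sketch that does exactly that using only the standard library. The URL is just a placeholder, and the extracted item (the page's `<title>` text) is an arbitrary choice for illustration; a real scraper would loop over many pages and pull out whatever fields it needs.

```python
# Minimal web-scraping sketch: download one page and extract its <title> text.
from html.parser import HTMLParser
from urllib.request import urlopen


class TitleParser(HTMLParser):
    """Collects the text inside the first <title> element of an HTML page."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


# Placeholder URL; substitute the page you actually want to scrape.
with urlopen("http://www.example.com") as response:
    html = response.read().decode("utf-8")

parser = TitleParser()
parser.feed(html)
print("Extracted title:", parser.title)
```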

The Python language itself provides a number of good options for applications of this kind. In this chapter, we will mainly work with the requests module to make client-side web requests from our Python programs. However, before we look into this module in more detail, we need to understand some web terminology so that we can design our applications effectively.
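As a preview of the kind of client-side request we will be making, the following is a minimal sketch using the requests module (assuming it is installed, for example via `pip install requests`). The URL is only a placeholder.

```python
# Minimal client-side request with the requests module.
import requests

response = requests.get("http://www.example.com")

print("Status code:", response.status_code)                 # e.g. 200 on success
print("Content type:", response.headers.get("Content-Type"))
print("First 100 characters of the response body:")
print(response.text[:100])
```

The `response` object returned by `requests.get()` bundles everything the server sent back, such as the status code, headers, and body, which is exactly the information we will be working with throughout this chapter.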
