
Downloading a web page

To scrape web pages, we first need to download them. Here is a simple Python script that uses Python's urllib module to download a URL:

import urllib.request

def download(url):
    return urllib.request.urlopen(url).read()
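As a quick sanity check (a sketch, not from the book), the function can be exercised without a live network connection by using a `data:` URL, which `urllib.request` handles locally:

```python
import urllib.request

def download(url):
    # Minimal version from above: no error handling yet.
    return urllib.request.urlopen(url).read()

# data: URLs are resolved in-process, so no network is needed.
print(download('data:text/plain,Hello'))  # b'Hello'
```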

When a URL is passed, this function will download the web page and return the HTML. The problem with this snippet is that, when downloading the web page, we might encounter errors that are beyond our control; for example, the requested page may no longer exist. In these cases, urllib will raise an exception and exit the script. To be safer, here is a more robust version to catch these exceptions:

import urllib.request
from urllib.error import URLError, HTTPError, ContentTooShortError

def download(url):
    print('Downloading:', url)
    try:
        html = urllib.request.urlopen(url).read()
    except (URLError, HTTPError, ContentTooShortError) as e:
        print('Download error:', e.reason)
        html = None
    return html

Now, when a download or URL error is encountered, the exception is caught and the function returns None.
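To see both paths without a live network connection, here is a small sketch (the `file:` path below is deliberately nonexistent, and the function is repeated so the snippet is self-contained):

```python
import urllib.request
from urllib.error import URLError, HTTPError, ContentTooShortError

def download(url):
    # Repeated from above so this sketch runs on its own.
    print('Downloading:', url)
    try:
        html = urllib.request.urlopen(url).read()
    except (URLError, HTTPError, ContentTooShortError) as e:
        print('Download error:', e.reason)
        html = None
    return html

# A data: URL succeeds without touching the network.
print(download('data:text/plain,Hello'))  # b'Hello'

# A file: URL pointing at a missing file raises URLError,
# which is caught, so the function returns None.
print(download('file:///no/such/page.html'))  # None
```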

Throughout this book, we will assume you are creating files with code that is presented without prompts (like the code above). When you see code that begins with a Python prompt >>> or an IPython prompt In [1]:, you will need to either enter it into the main file you have been using, or save the file and import those functions and classes into your Python interpreter. If you run into any issues, please take a look at the code in the book repository at https://github.com/kjam/wswp.