
Downloading a web page

To scrape web pages, we first need to download them. Here is a simple function that uses Python's urllib module to download a URL:

import urllib.request

def download(url):
    return urllib.request.urlopen(url).read()
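
For example, calling the function from a Python interpreter might look like this (http://example.com is purely illustrative; read() returns the page source as bytes):

>>> html = download('http://example.com')
>>> type(html)
<class 'bytes'>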

When a URL is passed, this function downloads the web page and returns its HTML. The problem with this snippet is that we might encounter errors beyond our control when downloading the page; for example, the requested page may no longer exist. In these cases, urllib will raise an exception that, if left unhandled, will terminate the script. To be safer, here is a more robust version that catches these exceptions:

import urllib.request
from urllib.error import URLError, HTTPError, ContentTooShortError

def download(url):
    print('Downloading:', url)
    try:
        html = urllib.request.urlopen(url).read()
    except (URLError, HTTPError, ContentTooShortError) as e:
        # the download failed, so report the error and return None
        print('Download error:', e.reason)
        html = None
    return html

Now, when a download or URL error is encountered, the exception is caught and the function returns None.
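
For instance, requesting a host that cannot be resolved exercises the error path. The .invalid top-level domain used below is reserved and will never resolve, though the exact error message printed depends on your platform:

>>> html = download('http://nonexistent.invalid')
Downloading: http://nonexistent.invalid
Download error: [Errno -2] Name or service not known
>>> html is None
True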

Throughout this book, we will assume you are creating files with code that is presented without prompts (like the code above). When you see code that begins with a Python prompt >>> or an IPython prompt In [1]:, you will need to either enter it into the main file you have been using, or save the file and import those functions and classes into your Python interpreter. If you run into any issues, please take a look at the code in the book repository at https://github.com/kjam/wswp.