
Avoiding spider traps

Currently, our crawler will follow any link it hasn't seen before. However, some websites dynamically generate their content and can have an infinite number of web pages. For example, if the website has an online calendar with links to the next month and year, then the next month will also have links to the month after it, and so on for as long as the widget is configured (which can be a very long time). The site may offer the same functionality through simple pagination navigation, essentially paginating over empty search result pages until the maximum page count is reached. This situation is known as a spider trap.

A simple way to avoid getting stuck in a spider trap is to track how many links have been followed to reach the current web page, which we will refer to as the depth. Then, when a maximum depth is reached, the crawler does not add links from that web page to the queue. To implement a maximum depth, we will change the seen variable, which currently tracks visited web pages, into a dictionary that also records the depth at which each link was found:

def link_crawler(..., max_depth=4):
    seen = {}
    ...
    if rp.can_fetch(user_agent, url):
        depth = seen.get(url, 0)
        if depth == max_depth:
            print('Skipping %s due to depth' % url)
            continue
        ...
        for link in get_links(html):
            if re.match(link_regex, link):
                abs_link = urljoin(start_url, link)
                if abs_link not in seen:
                    seen[abs_link] = depth + 1
                    crawl_queue.append(abs_link)

With this feature in place, we can be confident the crawl will eventually complete. To disable it, max_depth can be set to a negative number so that the current depth is never equal to it.
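
For reference, here is a minimal, self-contained sketch of a depth-limited crawler in this style. It is not the full implementation from this chapter: the download and get_links helpers are simplified stand-ins for the ones built earlier, and the start URL in the usage example is only a placeholder:

import re
from urllib import robotparser
from urllib.parse import urljoin
from urllib.request import Request, urlopen


def download(url, user_agent='wswp'):
    """Download a page, returning its HTML or None on error (simplified stand-in)."""
    try:
        request = Request(url, headers={'User-Agent': user_agent})
        return urlopen(request, timeout=10).read().decode('utf-8', errors='replace')
    except Exception as e:
        print('Download error:', e)
        return None


def get_links(html):
    """Return a list of href values found in the page (simplified stand-in)."""
    webpage_regex = re.compile("""<a[^>]+href=["'](.*?)["']""", re.IGNORECASE)
    return webpage_regex.findall(html)


def link_crawler(start_url, link_regex, user_agent='wswp', max_depth=4):
    """Crawl from start_url, following links that match link_regex,
    but never expanding pages deeper than max_depth (negative disables the limit)."""
    rp = robotparser.RobotFileParser()
    rp.set_url(urljoin(start_url, '/robots.txt'))
    try:
        rp.read()
    except Exception:
        pass  # if robots.txt cannot be fetched, assume crawling is allowed
    crawl_queue = [start_url]
    seen = {start_url: 0}  # maps each URL to the depth at which it was found
    while crawl_queue:
        url = crawl_queue.pop()
        if not rp.can_fetch(user_agent, url):
            print('Blocked by robots.txt:', url)
            continue
        depth = seen.get(url, 0)
        if depth == max_depth:
            # do not queue links found on pages at the maximum depth
            print('Skipping %s due to depth' % url)
            continue
        html = download(url, user_agent)
        if html is None:
            continue
        for link in get_links(html):
            if re.match(link_regex, link):
                abs_link = urljoin(start_url, link)
                if abs_link not in seen:
                    seen[abs_link] = depth + 1
                    crawl_queue.append(abs_link)


if __name__ == '__main__':
    # placeholder start URL and link pattern for illustration only
    link_crawler('http://example.webscraping.com', '/(index|view)', max_depth=2)

Note that crawl_queue.pop() removes the most recently added URL, giving a depth-first order; popping from the front of the queue (for example, with collections.deque and popleft()) would crawl breadth-first instead.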
