
Avoiding spider traps

Currently, our crawler will follow any link it hasn't seen before. However, some websites dynamically generate their content and can have an infinite number of web pages. For example, if a website has an online calendar with links to the next month and year, then the page for the next month will in turn link to the month after that, and so on for as long as the widget is configured (which can be a very long time). A site may offer the same functionality with simple pagination navigation, essentially paginating over empty search result pages until the maximum pagination is reached. This situation is known as a spider trap.

A simple way to avoid getting stuck in a spider trap is to track how many links have been followed to reach the current web page, which we will refer to as depth. Then, when the maximum depth is reached, the crawler does not add links from that web page to the queue. To implement a maximum depth, we will change the seen variable, which currently tracks visited web pages, into a dictionary that also records the depth at which each link was found:


def link_crawler(..., max_depth=4):
    seen = {}
    ...
        if rp.can_fetch(user_agent, url):
            # depth at which the current URL was discovered (0 for the start URL)
            depth = seen.get(url, 0)
            if depth == max_depth:
                print('Skipping %s due to depth' % url)
                continue
            ...
            for link in get_links(html):
                if re.match(link_regex, link):
                    abs_link = urljoin(start_url, link)
                    if abs_link not in seen:
                        # record the new link one level deeper and queue it
                        seen[abs_link] = depth + 1
                        crawl_queue.append(abs_link)

Now, with this feature, we can be confident the crawl will complete eventually. To disable this feature, max_depth can be set to a negative number so the current depth will never be equal to it.
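
As a minimal usage sketch, assuming the link_crawler signature from the earlier examples (the URL and link pattern shown here are only illustrative placeholders):

# limit the crawl to pages reachable within four links of the start page
link_crawler('http://example.webscraping.com', '/(index|view)/', max_depth=4)

# effectively disable the depth check by passing a negative value
link_crawler('http://example.webscraping.com', '/(index|view)/', max_depth=-1)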
