- Python Web Scraping(Second Edition)
- Katharine Jarmul Richard Lawson
Examining the Sitemap
Sitemap files are provided by websites to help crawlers locate their updated content without needing to crawl every web page. For further details, the sitemap standard is defined at http://www.sitemaps.org/protocol.html. Many web publishing platforms can generate a sitemap automatically. Here is the content of the Sitemap file listed in the site's robots.txt file:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>http://example.webscraping.com/view/Afghanistan-1</loc></url>
<url><loc>http://example.webscraping.com/view/Aland-Islands-2</loc></url>
<url><loc>http://example.webscraping.com/view/Albania-3</loc></url>
...
</urlset>
This sitemap provides links to all the website's pages, which we will use in the next section to build our first crawler. Sitemap files provide an efficient way to crawl a website, but they need to be treated carefully because they can be missing, out of date, or incomplete.
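As a quick illustration of how the links can be pulled out of a sitemap, here is a minimal sketch that parses the XML shown above with the standard library's `xml.etree.ElementTree`. The sample content is embedded as a string for self-containment; in practice you would first download the file from the Sitemap URL listed in robots.txt. The function name `parse_sitemap` is our own choice for this sketch, not part of any library.

```python
from xml.etree import ElementTree

# Sample sitemap content, as shown above; a real crawler would download
# this from the Sitemap URL listed in the site's robots.txt file.
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>http://example.webscraping.com/view/Afghanistan-1</loc></url>
<url><loc>http://example.webscraping.com/view/Aland-Islands-2</loc></url>
<url><loc>http://example.webscraping.com/view/Albania-3</loc></url>
</urlset>"""


def parse_sitemap(xml_text):
    """Return the list of page URLs found in a sitemap document."""
    # The sitemap elements live in this XML namespace, so we must
    # qualify the tag names when searching the tree.
    ns = {'sm': 'http://www.sitemaps.org/schemas/sitemap/0.9'}
    root = ElementTree.fromstring(xml_text)
    return [loc.text for loc in root.findall('sm:url/sm:loc', ns)]


urls = parse_sitemap(SITEMAP_XML)
print(urls[0])  # http://example.webscraping.com/view/Afghanistan-1
```

Because sitemaps can be malformed or truncated, production code should also be prepared for `ElementTree.ParseError` and fall back to crawling the site directly.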