
Search engines

One well-known use case for web scraping is indexing websites for the purpose of building a search engine. In this case, a web scraper would visit different websites and follow references to other websites in order to discover all of the content available on the internet. By collecting some of the content from each page, you could respond to search queries by matching the query terms against the content you have collected. You could also suggest related pages by tracking how pages link to one another, and rank the most important pages by the number of connections they have to other sites.
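To make the idea concrete, here is a minimal sketch of such a crawler in Go, using only the standard library. The seed URL, the page limit, and the naive regular expression for extracting links are illustrative assumptions; a real search engine would parse HTML properly, respect robots.txt, and store its index far more efficiently.

// A minimal sketch of a search-engine-style crawler: it visits pages,
// follows links to discover new ones, and records which words appear
// on which pages so that queries can later be matched against them.
package main

import (
	"fmt"
	"io"
	"net/http"
	"regexp"
	"strings"
)

// Naive link extraction for illustration only; real crawlers parse the HTML.
var linkPattern = regexp.MustCompile(`href="(https?://[^"]+)"`)

func main() {
	index := map[string][]string{}           // word -> pages containing it
	visited := map[string]bool{}              // pages already crawled
	queue := []string{"https://example.com"}  // hypothetical seed page

	for len(queue) > 0 && len(visited) < 10 { // small limit for the sketch
		page := queue[0]
		queue = queue[1:]
		if visited[page] {
			continue
		}
		visited[page] = true

		resp, err := http.Get(page)
		if err != nil {
			continue
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			continue
		}
		content := string(body)

		// Record every word on the page so search queries can be matched later.
		for _, word := range strings.Fields(content) {
			key := strings.ToLower(word)
			index[key] = append(index[key], page)
		}

		// Follow links to other pages, which is how new content is discovered.
		for _, match := range linkPattern.FindAllStringSubmatch(content, -1) {
			queue = append(queue, match[1])
		}
	}

	fmt.Printf("indexed %d pages, %d distinct terms\n", len(visited), len(index))
}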

Googlebot is the most famous example of a web scraper used to build a search engine. It is the first step in building the search engine, as it downloads, indexes, and ranks each page on a website. It also follows links to other websites, which is how it is able to index a substantial portion of the internet. According to Googlebot's documentation, the scraper attempts to reach each web page every few seconds, which puts its estimated throughput well into the billions of pages per day!

If your goal is to build a search engine, albeit on a much smaller scale, you will find enough tools in this book to collect the information you need. This book will not, however, cover indexing and ranking pages to provide relevant search results.
