- Learning Python Web Penetration Testing
- Christian Martorella
Crawlers and spiders
Crawlers and spiders are used for mapping web applications, automating the task of cataloging all the content and functionality. The tool automatically crawls the application by following all the links it finds, submitting forms, analyzing the responses for new content, and repeating this process until it covers the whole application.
There are standalone crawlers and spiders, such as Scrapy (http://scrapy.org), which is written in Python, and command-line tools such as HTTrack (http://www.httrack.com). There are also crawlers and spiders integrated into proxies such as Burp and ZAP, which benefit from the content that has already passed through the proxy to enrich their knowledge of the application.
A good example of why this is valuable is an application that is heavy on JavaScript. Traditional crawlers won't interpret the JS, but the browser will, so the proxy sees the resulting requests and adds them to the crawler's catalog. We'll look at Scrapy in more detail later.
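The crawl loop described above, fetch a page, extract its links, queue any unseen ones, and repeat, can be sketched with just the Python standard library. This is an illustrative sketch, not Scrapy's API: the `fetch` callable, the `example.test` URLs, and the in-memory "site" are all made-up stand-ins so the loop can run without a network connection.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(start_url, fetch, max_pages=50):
    """Breadth-first crawl: visit a page, extract links, queue unseen ones.

    `fetch` is a callable returning the HTML for a URL (a hypothetical hook
    so the loop can be tested offline); a real tool would issue an HTTP
    request here, and would also submit forms and respect scope rules.
    """
    seen, queue, catalog = {start_url}, [start_url], []
    while queue and len(catalog) < max_pages:
        url = queue.pop(0)
        catalog.append(url)
        parser = LinkExtractor(url)
        parser.feed(fetch(url))
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return catalog

# Offline demo: a tiny two-page "site" served from a dict.
site = {
    "http://example.test/": '<a href="/about">About</a><a href="/">Home</a>',
    "http://example.test/about": '<a href="/">Back</a>',
}
pages = crawl("http://example.test/", lambda u: site.get(u, ""))
print(pages)  # each page is visited exactly once
```

The `seen` set is what keeps the crawl from looping forever on circular links, which is the same deduplication job a proxy-integrated spider performs against its catalog.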