
Crawlers and spiders

Crawlers and spiders are used for mapping web applications, automating the task of cataloging all of their content and functionality. The tool crawls the application automatically: it follows every link it finds, submits forms, analyzes the responses for new content, and repeats this process until it has covered the whole application.
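The loop described above can be sketched with only Python's standard library. This is not Scrapy or any proxy's spider, just a minimal illustration of the follow-extract-repeat cycle; the `fetch` callable and the same-host restriction are assumptions made here to keep the sketch self-contained (a real spider would also submit forms, honor robots.txt, and throttle requests):

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects the href targets of anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl limited to the start URL's host.

    `fetch` is any callable mapping a URL to its HTML body, so the
    loop can be driven by urllib, requests, or a test double.
    Returns the catalog of pages visited, in crawl order.
    """
    host = urlparse(start_url).netloc
    seen = {start_url}
    queue = deque([start_url])
    catalog = []
    while queue and len(catalog) < max_pages:
        url = queue.popleft()
        try:
            body = fetch(url)
        except Exception:
            continue  # unreachable page: skip it, keep crawling
        catalog.append(url)
        parser = LinkExtractor()
        parser.feed(body)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return catalog
```

Passing `fetch` in as a parameter keeps the traversal logic separate from the HTTP layer, which is also how you would point such a loop at a proxy instead of fetching directly.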

There are standalone crawlers and spiders, such as Scrapy (http://scrapy.org), a framework written in Python, and command-line tools such as HTTrack (http://www.httrack.com). There are also crawlers and spiders integrated into intercepting proxies, such as Burp Suite and OWASP ZAP, which benefit from the content that has already passed through the proxy to enrich their knowledge of the application.

One good example of why this is valuable is a JavaScript-heavy application. A traditional crawler won't interpret the JavaScript, but the browser will; the requests the browser generates pass through the proxy, which adds them to the crawler's catalog. We'll look at Scrapy in more detail later.
