
Understanding how to perform HTTP GET requests

One of the richest places to find good data is online, and GET requests are the most common way of retrieving it from an HTTP web server. In this recipe, we will grab all the links from a Wikipedia article and print them to the terminal. To grab the links easily, we will use a helpful library called HandsomeSoup, which lets us manipulate and traverse a web page through CSS selectors.

Getting ready

We will be collecting all links from a Wikipedia web page. Make sure to have an Internet connection before running this recipe.

Install the HandsomeSoup CSS selector package, and also install the HXT library if it is not already installed. To do this, use the following commands:

$ cabal install HandsomeSoup
$ cabal install hxt

How to do it...

  1. This recipe requires hxt for parsing HTML and HandsomeSoup for its easy-to-use CSS selectors. Import both, as shown in the following code snippet:
    import Text.XML.HXT.Core
    import Text.HandsomeSoup
  2. Define and implement main as follows:
    main :: IO ()
    main = do
  3. Pass in the URL as a string to HandsomeSoup's fromUrl function:
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
  4. Select all links within the bodyContent element of the Wikipedia page and print them as follows:
        links <- runX $ doc >>> css "#bodyContent a" ! "href"
        print links
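Assembled from the preceding steps, the complete program looks like this (the comments are explanatory additions):

    import Text.XML.HXT.Core
    import Text.HandsomeSoup

    main :: IO ()
    main = do
        -- fromUrl builds an hxt arrow that downloads and parses the page
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
        -- css selects the matching nodes; ! extracts the given attribute
        links <- runX $ doc >>> css "#bodyContent a" ! "href"
        print links

Running the program (for example, with runhaskell Main.hs, assuming the source is saved as Main.hs) prints the collected href values as a Haskell list of strings.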

How it works…

The HandsomeSoup package provides easy CSS selectors on top of hxt's arrow combinators. In this recipe, we run the #bodyContent a selector on a Wikipedia article web page. This finds all link (a) tags that are descendants of the element with the bodyContent ID, and the ! "href" arrow then extracts the href attribute of each matching tag.
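Because css and ! are ordinary hxt arrows, they compose with the rest of the hxt combinators. The following is a minimal sketch of two variations on the same article URL; the selector strings, the //> getText step, and the use of take are illustrative assumptions rather than part of the original recipe:

    import Text.XML.HXT.Core
    import Text.HandsomeSoup

    main :: IO ()
    main = do
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
        -- image sources inside the article body (illustrative selector)
        imgs  <- runX $ doc >>> css "#bodyContent img" ! "src"
        -- visible text of each link, descending to text nodes with //> getText
        texts <- runX $ doc >>> css "#bodyContent a" //> getText
        print (take 5 imgs)
        print (take 5 texts)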

See also…

Another common way to obtain data online is through POST requests. To find out more, refer to the Learning how to perform HTTP POST requests recipe.
