
Understanding how to perform HTTP GET requests

One of the best places to find good data is online, and a GET request is the standard way of asking an HTTP web server for a resource. In this recipe, we will grab all the links from a Wikipedia article and print them to the terminal. To grab the links easily, we will use a helpful library called HandsomeSoup, which lets us manipulate and traverse a webpage through CSS selectors.

Getting ready

We will be collecting all links from a Wikipedia web page. Make sure to have an Internet connection before running this recipe.

Install the HandsomeSoup CSS selector package, along with the HXT library if it is not already installed, using the following commands:

$ cabal install HandsomeSoup
$ cabal install hxt

How to do it...

  1. This recipe requires hxt for parsing HTML and HandsomeSoup for its easy-to-use CSS selectors, as shown in the following code snippet:
    import Text.XML.HXT.Core
    import Text.HandsomeSoup
  2. Define and implement main as follows:
    main :: IO ()
    main = do
  3. Pass in the URL as a string to HandsomeSoup's fromUrl function:
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
  4. Select all links within the element that has the bodyContent ID on the Wikipedia page, as follows (the complete program, assembled from these steps, appears after this list):
        links <- runX $ doc >>> css "#bodyContent a" ! "href"
        print links
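Putting the preceding steps together, the whole recipe fits into a single source file. The comments here are added for clarity, and the filename is only a suggestion; the code itself is the same as in the steps above:

    -- GetLinks.hs (suggested filename)
    import Text.XML.HXT.Core
    import Text.HandsomeSoup

    main :: IO ()
    main = do
        -- Point HandsomeSoup at the article we want to scrape
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
        -- Select every <a> tag under #bodyContent and pull out its href attribute
        links <- runX $ doc >>> css "#bodyContent a" ! "href"
        -- Print the resulting list of URLs
        print links

Run it with runhaskell GetLinks.hs; print outputs the collected hrefs as a single Haskell list of strings.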

How it works…

The HandsomeSoup package provides easy-to-use CSS selectors on top of hxt's arrows. In this recipe, we run the #bodyContent a selector on a Wikipedia article web page. This finds all <a> tags that are descendants of the element with the bodyContent ID, and the ! "href" operator extracts each link's href attribute. Finally, runX runs the arrow and returns the matched values as a list of strings in IO.
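Because runX hands back an ordinary list of strings, any post-processing is plain Haskell. The following sketch is not part of the recipe; it assumes that internal article links are relative paths beginning with /wiki/ and filters the results down to those, printing one per line:

    import Data.List (isPrefixOf)
    import Text.XML.HXT.Core
    import Text.HandsomeSoup

    main :: IO ()
    main = do
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
        links <- runX $ doc >>> css "#bodyContent a" ! "href"
        -- Assumption: internal article links start with "/wiki/";
        -- anchors, external URLs, and special pages are dropped
        let articleLinks = filter ("/wiki/" `isPrefixOf`) links
        -- Print each matching link on its own line
        mapM_ putStrLn articleLinks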

See also…

Another common way to obtain data online is through POST requests. To find out more, refer to the Learning how to perform HTTP POST requests recipe.
