
How it works

The program connects to SQS and opens the queue. Opening the queue for reading is also done with sqs.create_queue, which simply returns the URL of the queue if it already exists.
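A minimal sketch of this step, assuming boto3 and a hypothetical queue name; the helper takes the client as a parameter so it works for both the writer and the reader side:

```python
def open_queue(sqs, queue_name):
    """Open the queue for reading, creating it if necessary.

    create_queue returns the existing queue's URL when a queue with
    this name already exists, so both producer and consumer can call
    it safely.
    """
    response = sqs.create_queue(QueueName=queue_name)
    return response["QueueUrl"]

# Usage (requires AWS credentials; queue name is an assumption):
# import boto3
# sqs = boto3.client("sqs", region_name="us-west-2")
# queue_url = open_queue(sqs, "PlanetMoreInfo")
```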

Then, it enters a loop calling sqs.receive_message, specifying the URL of the queue, the number of messages to receive in each read, and the maximum time, in seconds, to wait if no messages are available.
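The read step can be sketched as a single receive_message call that the program wraps in a loop; the parameter values here are illustrative defaults, not the book's exact ones:

```python
def read_messages(sqs, queue_url, max_messages=1, wait_seconds=20):
    """One receive_message call; returns the (possibly empty) batch.

    WaitTimeSeconds enables long polling: SQS holds the request open
    for up to that many seconds when no message is immediately
    available, instead of returning an empty response right away.
    """
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=wait_seconds,
    )
    # The "Messages" key is absent when the queue is empty.
    return response.get("Messages", [])

# The program's loop is then roughly:
# while True:
#     for message in read_messages(sqs, queue_url):
#         process(message)   # process() is a hypothetical handler
```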

If a message is read, the URL in the message is retrieved and scraping techniques are used to read the page at the URL and extract the planet's name and information about its albedo.
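A sketch of the scraping step, with two loud assumptions: that the message body is JSON holding the URL under a "url" key, and that the page marks the name and albedo with elements whose ids are "planet_name" and "albedo" (the real page structure may differ):

```python
import json
import re
from urllib.request import urlopen

def extract_planet_info(html):
    """Pull the planet's name and albedo out of the page HTML.

    The id-based patterns below are assumptions about the page
    structure, used here only to illustrate the extraction step.
    """
    name = re.search(r'id="planet_name"[^>]*>([^<]+)<', html)
    albedo = re.search(r'id="albedo"[^>]*>([^<]+)<', html)
    return {
        "name": name.group(1).strip() if name else None,
        "albedo": albedo.group(1).strip() if albedo else None,
    }

def process_message(message):
    """Read the URL from the message body and scrape that page."""
    url = json.loads(message["Body"])["url"]  # body layout is an assumption
    with urlopen(url) as response:
        html = response.read().decode("utf-8")
    return extract_planet_info(html)
```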

Note that we retrieve the receipt handle of the message; it is needed to delete the message from the queue. If we do not delete the message, SQS makes it visible in the queue again after a period of time (the visibility timeout). So if our scraper crashes without performing this acknowledgement, the message becomes available again for another scraper to process (or the same one once it is back up).
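The acknowledgement can be sketched as a delete_message call using the receipt handle from the received message:

```python
def acknowledge(sqs, queue_url, message):
    """Delete a processed message so SQS does not redeliver it.

    Until delete_message is called, the message is only hidden for
    the queue's visibility timeout; if the consumer crashes before
    deleting, SQS makes the message visible to other consumers again.
    """
    sqs.delete_message(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
    )
```

Calling this only after the scrape succeeds gives at-least-once processing: a crash mid-scrape leaves the message in the queue for a retry.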
