To apply concurrency, we simply use the threading module that we have been discussing to create separate threads to handle different web requests. Let's take a look at the Chapter05/example3.py file, as shown in the following code:
# Chapter05/example3.py

import threading
import requests
import time

def ping(url):
    res = requests.get(url)
    print(f'{url}: {res.text}')
In this example, we include the sequential logic from the previous example to process our URL list, so that we can compare the improvement in speed when we apply threading to our ping test program. We also create a separate thread to ping each of the URLs in our list using the threading module; these threads execute independently of one another. The time taken to process the URLs sequentially and concurrently is tracked using methods from the time module.
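A minimal sketch of that comparison logic is shown in the following code, assuming the ping() function defined previously and a hypothetical urls list; the actual Chapter05/example3.py file may organize this slightly differently:

# Sketch of the sequential vs. threaded ping tests (assumes ping() from above)
urls = [
    'http://httpstat.us/200',
    'http://httpstat.us/400',
    'http://httpstat.us/404',
    'http://httpstat.us/408',
    'http://httpstat.us/500',
    'http://httpstat.us/524',
]

if __name__ == '__main__':
    # Sequential version: ping each URL one after another
    start = time.time()
    for url in urls:
        ping(url)
    print(f'Sequential: {time.time() - start : .2f} seconds')

    print()

    # Threaded version: one thread per URL, all running concurrently
    start = time.time()
    threads = [threading.Thread(target=ping, args=(url,)) for url in urls]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()  # wait for every request to complete
    print(f'Threading: {time.time() - start : .2f} seconds')

Note that join() is called on each thread so that the elapsed time is only recorded after every request has finished.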
Run the program and your output should be similar to the following:
http://httpstat.us/200: 200 OK
http://httpstat.us/400: 400 Bad Request
http://httpstat.us/404: 404 Not Found
http://httpstat.us/408: 408 Request Timeout
http://httpstat.us/500: 500 Internal Server Error
http://httpstat.us/524: 524 A timeout occurred
Sequential: 0.82 seconds
http://httpstat.us/404: 404 Not Found
http://httpstat.us/200: 200 OK
http://httpstat.us/400: 400 Bad Request
http://httpstat.us/500: 500 Internal Server Error
http://httpstat.us/524: 524 A timeout occurred
http://httpstat.us/408: 408 Request Timeout
Threading: 0.14 seconds
While the specific times that the sequential and threading logic take to process all the URLs will vary from system to system, there should still be a clear distinction between the two. Specifically, here we can see that the threading logic was almost six times faster than the sequential logic (which corresponds to the fact that we had six threads processing the six URLs in parallel). There is no doubt, then, that concurrency can provide a significant speedup for our ping test application specifically, and for the process of making web requests in general.