Developing an AI application locally using AWS Chalice
First, let's implement the private APIs and services that provide common capabilities. We will have two services; both of them should be created in the chalicelib directory:
- StorageService: The StorageService class, implemented in the storage_service.py file, connects to AWS S3 via boto3 to perform the file operations our applications need.
Let's implement StorageService, as follows:
import boto3

class StorageService:
    def __init__(self, storage_location):
        self.client = boto3.client('s3')
        self.bucket_name = storage_location

    def get_storage_location(self):
        return self.bucket_name

    def list_files(self):
        response = self.client.list_objects_v2(Bucket = self.bucket_name)

        # describe each S3 object in generic terms (location, file name, URL)
        files = []
        for content in response['Contents']:
            files.append({
                'location': self.bucket_name,
                'file_name': content['Key'],
                'url': "https://" + self.bucket_name + ".s3.amazonaws.com/" + content['Key']
            })

        return files
This class currently has a constructor and two methods:
- The __init__() constructor takes a parameter, storage_location. In this implementation of StorageService, storage_location represents the S3 bucket where files will be stored. However, we purposely gave this parameter a generic name so that different implementations of StorageService can use other storage services besides AWS S3.
- The first method, get_storage_location(), just returns the S3 bucket name as storage_location. Other service implementations will use this method to get the generic storage location.
- The second method, list_files(), retrieves a list of files from an S3 bucket specified by storage_location. The files in this bucket are then returned as a list of Python objects. Each object describes a file, including its location, filename, and URL.
Note that we are describing the files in generic terms, such as location, filename, and URL, rather than bucket, key, and S3 URL. In addition, we are returning a new Python list in our own JSON format, rather than passing along the raw boto3 response. This prevents AWS implementation details from leaking out of this private API's implementation.
The design decisions in StorageService are made to hide the implementation details from its clients. Because we are hiding the boto3 and S3 details, we are free to change StorageService so that we can use other SDKs or services to implement the file storage capabilities.
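To make this concrete, here is a minimal sketch (not part of the book's project) of a hypothetical LocalStorageService that is backed by a local directory yet honors the same API contract, that is, get_storage_location() and list_files() with the same return shape:

import os

class LocalStorageService:
    """Hypothetical drop-in alternative to StorageService backed by a local directory."""
    def __init__(self, storage_location):
        # here, storage_location is a directory path rather than an S3 bucket name
        self.storage_location = storage_location

    def get_storage_location(self):
        return self.storage_location

    def list_files(self):
        # return the same generic shape as the S3-backed implementation
        return [{
            'location': self.storage_location,
            'file_name': name,
            'url': 'file://' + os.path.join(self.storage_location, name)
        } for name in os.listdir(self.storage_location)]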
- RecognitionService: The RecognitionService class, implemented in the recognition_service.py file, calls the Amazon Rekognition service via boto3 to perform image and video analysis tasks.
Let's implement RecognitionService, as follows:
import boto3

class RecognitionService:
    def __init__(self, storage_service):
        self.client = boto3.client('rekognition')
        self.bucket_name = storage_service.get_storage_location()

    def detect_objects(self, file_name):
        # Rekognition reads the image directly from S3 via this bucket/key reference
        response = self.client.detect_labels(
            Image = {
                'S3Object': {
                    'Bucket': self.bucket_name,
                    'Name': file_name
                }
            }
        )

        objects = []
        for label in response['Labels']:
            objects.append({
                'label': label['Name'],
                'confidence': label['Confidence']
            })

        return objects
This class currently has a constructor and one method:
- The __init__() constructor takes in StorageService as a dependency in order to access the stored files. This allows new implementations of StorageService to be injected and used by RecognitionService, as long as they implement the same API contract. This is known as the dependency injection design pattern, which makes software components more modular, reusable, and readable.
- The detect_objects() method takes in an image filename, including both the path and name portions, and then performs object detection on the specified image. This method implementation assumes that the image file is stored in an S3 bucket and calls Rekognition's detect_labels() function from the boto3 SDK. When the labels are returned by boto3, this method constructs a new Python list, with each item in the list describing an object that was detected and the confidence level of the detection.
Note that the method's signature (its parameters and return value) does not expose the fact that the S3 and Rekognition services are used. This is the same information-hiding practice we applied in StorageService.
In RecognitionService, we could have used the StorageService that's passed into the constructor to fetch the actual image files and perform detection on their contents. Instead, we pass the image files' bucket and names directly to the detect_labels() function. This latter implementation choice takes advantage of the fact that AWS S3 and Amazon Rekognition are nicely integrated. The important point is that the private API's contract allows both implementations, and our design decision picked the latter.
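For comparison, the former option might look like the following sketch (not the book's implementation). It assumes the constructor keeps a reference to storage_service and that StorageService gained a hypothetical get_file() method returning raw bytes; Rekognition's detect_labels() accepts image bytes as well as an S3 object reference:

import boto3

class RecognitionServiceBytes:
    """Hypothetical variant that passes image bytes instead of an S3 reference."""
    def __init__(self, storage_service):
        self.client = boto3.client('rekognition')
        self.storage_service = storage_service  # kept so we can fetch file contents

    def detect_objects(self, file_name):
        # assumes a hypothetical StorageService.get_file() that returns raw bytes
        image_bytes = self.storage_service.get_file(file_name)
        response = self.client.detect_labels(Image={'Bytes': image_bytes})
        return [{'label': label['Name'], 'confidence': label['Confidence']}
                for label in response['Labels']]

Note that this variant would work with any storage backend, including the hypothetical LocalStorageService shown earlier, at the cost of moving the image bytes through our application instead of letting Rekognition read them from S3 directly.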
- app.py: Next, let's implement the public APIs that are tailored for our image recognition web application. We only need one public API for the demo application. It should be implemented in the app.py file in the Chalice project structure.
Replace the existing contents of app.py with the following code block. Let's understand its components:
- The demo_object_detection() function uses StorageService and RecognitionService to perform its tasks; therefore, we need to import these from chalicelib and create new instances of these services.
- storage_location is initialized to contents.aws.ai, which contains the image files we uploaded in the previous chapter. You should replace contents.aws.ai with your own S3 bucket.
- This function is annotated with @app.route('/demo-object-detection', cors = True). This is a special construct used by Chalice to define a RESTful endpoint with a URL path called /demo-object-detection:
- Chalice maps this endpoint to the demo_object_detection() Python function.
- The annotation also sets cors to true, which enables Cross-Origin Resource Sharing (CORS) by adding certain HTTP headers, such as Access-Control-Allow-Origin, to this endpoint's responses. These headers tell a browser to let a web application running at one origin (domain) access resources from a different origin (domain, protocol, or port number). Let's have a look at the implementation in the following code:
from chalice import Chalice
from chalicelib import storage_service
from chalicelib import recognition_service

import random

#####
# chalice app configuration
#####
app = Chalice(app_name='Capabilities')
app.debug = True

#####
# services initialization
#####
storage_location = 'contents.aws.ai'
storage_service = storage_service.StorageService(storage_location)
recognition_service = recognition_service.RecognitionService(storage_service)

@app.route('/demo-object-detection', cors = True)
def demo_object_detection():
    """randomly selects one image to demo object detection"""
    files = storage_service.list_files()
    images = [file for file in files if file['file_name'].endswith(".jpg")]
    image = random.choice(images)

    objects = recognition_service.detect_objects(image['file_name'])

    return {
        'imageName': image['file_name'],
        'imageUrl': image['url'],
        'objects': objects
    }
Let's talk about the preceding code in detail:
- The demo_object_detection() function gets a list of image files (files that have a .jpg extension) from StorageService and then randomly selects one of them to perform the object detection demo.
- Random selection is implemented here to simplify our demo application so that it only displays one image and its detection results.
- Once the image has been randomly selected, the function calls detect_objects() from RecognitionService and then generates an HTTP response in the JavaScript Object Notation (JSON) format.
- Chalice automatically wraps the returned Python object in an HTTP response with the proper headers, status code, and JSON payload. This JSON format is part of the contract between our frontend and this public API.
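As a quick sanity check of that contract, here is a hedged sketch of a programmatic test, assuming Chalice 1.14 or later, which ships a test client in chalice.test (this test is not part of the book's project):

from chalice.test import Client
from app import app

def test_demo_object_detection():
    # exercise the endpoint in-process and verify the agreed JSON shape
    with Client(app) as client:
        response = client.http.get('/demo-object-detection')
        assert response.status_code == 200
        body = response.json_body
        assert 'imageName' in body and 'imageUrl' in body and 'objects' in body

Keep in mind that such a test still issues real S3 and Rekognition requests through boto3, so it needs valid AWS credentials to pass.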
We are ready to run and test the application's backend locally. Chalice provides a local mode, which includes a local HTTP server that you can use to test the endpoints.
- Start the chalice local mode within the pipenv virtual environment with the following commands:
$ cd Capabilities
$ chalice local
Restarting local dev server.
Found credentials in shared credentials file: ~/.aws/credentials
Serving on http://127.0.0.1:8000
Now, the local HTTP server is running at the address and port number in the Terminal output; that is, http://127.0.0.1:8000. Keep in mind that even though we are running the endpoint locally, the services that the endpoint calls are making requests to AWS via the boto3 SDK.
Chalice's local mode automatically detected the AWS credentials in the ~/.aws/credentials file. Our service implementations, which are using boto3, will use the key pairs that are found there and will issue requests with the corresponding user's permissions. If this user does not have permissions for S3 or Rekognition, the request to the endpoint will fail.
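If you are unsure which identity boto3 will resolve to, one quick check (a sketch, not part of the project) is to ask AWS STS which caller your credentials belong to:

import boto3

# prints the ARN of the user or role behind the resolved credentials;
# this is the identity that needs S3 and Rekognition permissions
print(boto3.client('sts').get_caller_identity()['Arn'])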
- We can now issue HTTP requests to the local server to test the /demo-object-detection endpoint. For example, you can use the Unix curl command as follows:
$ curl http://127.0.0.1:8000/demo-object-detection
{"imageName":"beagle_on_gravel.jpg","imageUrl":"https://contents.aws.ai.s3.amazonaws.com/beagle_on_gravel.jpg","objects":[{"label":"Pet","confidence":98.9777603149414},{"label":"Hound","confidence":98.9777603149414},{"label":"Canine","confidence":98.9777603149414},{"label":"Animal","confidence":98.9777603149414},{"label":"Dog","confidence":98.9777603149414},{"label":"Mammal","confidence":98.9777603149414},{"label":"Beagle","confidence":98.0347900390625},{"label":"Road","confidence":82.47952270507812},{"label":"Dirt Road","confidence":74.52912902832031},{"label":"Gravel","confidence":74.52912902832031}]}
Note that we simply appended the endpoint's URL path to the base address and port number where the local HTTP server is running. The request should return JSON output from the local endpoint.
This is the JSON that our web user interface will receive and use to display the detection results to the user.
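The single-line output can be hard to read in the Terminal; one option is to pipe it through Python's built-in json.tool module for pretty-printing:

$ curl http://127.0.0.1:8000/demo-object-detection | python -m json.tool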