- Hands-On Artificial Intelligence on Amazon Web Services
- Subhashini Tripuraneni, Charles Song
First project with the AWS SDK
Now, let's write our first Python application that will detect the objects in the images that are stored in an S3 bucket. To do this, we will leverage boto3 to interact with both the Amazon S3 service and the Amazon Rekognition service:
- The first source file that we will create is storage_service.py. Create this source file in the ObjectDetectionDemo directory. The following is the Python code for storage_service.py:
import boto3

class StorageService:
    def __init__(self):
        self.s3 = boto3.resource('s3')

    def get_all_files(self, storage_location):
        return self.s3.Bucket(storage_location).objects.all()
In this code, please note the following information:
- storage_service.py contains a Python class, StorageService, that encapsulates the business logic of interacting with Amazon S3.
- This class implements just one method, get_all_files(), which returns all of the objects stored within a bucket specified by the storage_location parameter.
- Other functionalities related to Amazon S3 can also be implemented in this file, such as listing the buckets, uploading files to buckets, and so on.
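Since get_all_files() returns every object in the bucket, callers often need to narrow the results down to a particular file type, just as the demo script later does for JPG files. The following is a small illustrative helper (filter_image_keys is our own addition for this sketch, not part of the book's project code):

```python
# Illustrative helper (not part of the project code): narrow a list of
# S3 object keys down to image files by extension.
IMAGE_EXTENSIONS = ('.jpg', '.jpeg', '.png')

def filter_image_keys(keys, extensions=IMAGE_EXTENSIONS):
    """Return only the keys whose extension marks them as image files."""
    return [key for key in keys if key.lower().endswith(extensions)]

print(filter_image_keys(['dog.jpg', 'notes.txt', 'cat.PNG']))
```

Because str.endswith() accepts a tuple of suffixes, one call covers all of the extensions, and lowercasing the key first makes the check case-insensitive.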
- The next source file that we will create is recognition_service.py. Create this source file in the ObjectDetectionDemo directory as well. The following is the Python code for recognition_service.py:
import boto3

class RecognitionService:
    def __init__(self):
        self.client = boto3.client('rekognition')

    def detect_objects(self, storage_location, image_file):
        response = self.client.detect_labels(
            Image={
                'S3Object': {
                    'Bucket': storage_location,
                    'Name': image_file
                }
            }
        )
        return response['Labels']
In this code, please note the following information:
- recognition_service.py contains a Python class, RecognitionService, that encapsulates the business logic of interacting with the Amazon Rekognition service.
- This class implements just one method, detect_objects(), which calls Rekognition's detect_labels API and then returns the labels from the response.
- Callers of this method can specify the S3 bucket and the filename with the storage_location and image_file parameters, respectively.
- Other functionalities related to Amazon Rekognition can also be implemented in this file, such as detecting text, analyzing faces, and so on.
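Every label that detect_objects() returns carries a confidence score, so callers frequently want to discard low-confidence labels. Here is a sketch of client-side filtering (the filter_labels helper and the sample data are our own; only the 'Name'/'Confidence' shape mimics the 'Labels' list that Rekognition actually returns):

```python
# Sketch: keep only the labels at or above a confidence threshold.
# The sample data below is fabricated for illustration; it mimics the
# shape of the 'Labels' list returned by Rekognition's detect_labels API.
def filter_labels(labels, min_confidence=80.0):
    """Drop labels whose confidence score falls below min_confidence."""
    return [label for label in labels if label['Confidence'] >= min_confidence]

sample_labels = [
    {'Name': 'Dog', 'Confidence': 98.9},
    {'Name': 'Gravel', 'Confidence': 74.5},
]
print(filter_labels(sample_labels))
```

Note that the real detect_labels API also accepts a MinConfidence parameter, which performs this filtering on the server side before the response is returned.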
- The final file that we will create is object_detection_demo.py. Create this source file in the ObjectDetectionDemo directory. The following is the Python code for object_detection_demo.py:
from storage_service import StorageService
from recognition_service import RecognitionService

storage_service = StorageService()
recognition_service = RecognitionService()

bucket_name = 'contents.aws.ai'

for file in storage_service.get_all_files(bucket_name):
    if file.key.endswith('.jpg'):
        print('Objects detected in image ' + file.key + ':')
        labels = recognition_service.detect_objects(file.bucket_name, file.key)
        for label in labels:
            print('-- ' + label['Name'] + ': ' + str(label['Confidence']))
In this code, object_detection_demo.py is a Python script that brings together our two service implementations in order to perform object detection on the images that are stored in our S3 bucket.
Here is the interaction diagram that depicts the flow of the demo application:
[Interaction diagram: object_detection_demo.py calls StorageService to list the image files in the S3 bucket, then calls RecognitionService to detect the objects in each image via Amazon Rekognition]
Please note the following information, all of which is shown in the preceding diagram:
- This script calls the StorageService to get all of the JPG image files that are stored in the contents.aws.ai bucket (you should replace this with your own bucket).
- Here, we are hardcoding the bucket name for simplicity, but you can take in the bucket name as a parameter in order to make the script more generic.
- Then, for each JPG image in the specified bucket, the script calls our RecognitionService to perform object detection and retrieves the labels that are found.
- The script also formats and prints out the labels, along with their confidence scores for the objects that were detected.
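The hardcoded bucket name mentioned above can be taken from the command line instead. A minimal sketch of that change (the get_bucket_name helper is our own, not from the book's code):

```python
import sys

# Sketch: read the bucket name from the first command-line argument,
# falling back to a default when no argument is given, e.g.:
#   python object_detection_demo.py my-own-bucket
def get_bucket_name(argv, default='contents.aws.ai'):
    """Return argv[1] if supplied, otherwise the default bucket name."""
    return argv[1] if len(argv) > 1 else default

bucket_name = get_bucket_name(sys.argv)
```

For anything beyond a single argument, the standard library's argparse module would be the more idiomatic choice.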
Note that we are using boto3 in both StorageService and RecognitionService. The boto3 objects manage the sessions between our project code and the AWS services. These sessions are created using the available credentials in the runtime environment. If you are running the script on your local development machine, then the AWS access key pair is taken from the ~/.aws/credentials file. We will cover how the credentials are used in other runtime environments in later chapters.
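For reference, the ~/.aws/credentials file that boto3 reads on a local development machine is a simple INI file. The key values below are placeholders, not real credentials:

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```

boto3 uses the [default] profile unless another profile is selected, for example via the AWS_PROFILE environment variable.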
Even though this is only a demo project, it is still good practice to organize the code into different components with separation of concerns. In this project, all of the business logic that interacts with the Amazon S3 service is encapsulated within the StorageService class; the same is done for all of the logic that interacts with the Amazon Rekognition service in the RecognitionService class. We will see more benefits of this design practice as our projects get larger and more complex.
- Now, let's run the script in the Python virtual environment. First, enter the virtual environment shell:
$ pipenv shell
In this command, please note the following information:
- This command starts a shell with the Python virtual environment within your normal Terminal shell.
- Within the virtual environment shell, the Python version and the packages that we specified and installed with pipenv are available to our script.
- Within the virtual environment, invoke the object_detection_demo.py script via the following command:
$ python object_detection_demo.py
The output of this command should display the objects that are detected in the images that are stored in the specified S3 bucket:
Objects detected in image animal-beagle-canine-460823.jpg:
-- Pet: 98.9777603149414
-- Hound: 98.9777603149414
-- Canine: 98.9777603149414
-- Animal: 98.9777603149414
-- Dog: 98.9777603149414
-- Mammal: 98.9777603149414
-- Beagle: 98.0347900390625
-- Road: 82.47952270507812
-- Dirt Road: 74.52912902832031
-- Gravel: 74.52912902832031
- Remember to exit the virtual environment and to return to your normal Terminal shell with the exit command:
$ exit
Congratulations, you just created your first intelligence-enabled application that leverages the power of AI to perform image analysis on the AWS platform! Sit back and think about it: with just a few lines of code, you were able to create a piece of software that can detect and identify countless objects in our world. This is the quick lift you get when leveraging the AWS AI services.