
First project with the AWS SDK

Now, let's write our first Python application that will detect the objects in the images that are stored in an S3 bucket. To do this, we will leverage boto3 to interact with both the Amazon S3 service and the Amazon Rekognition service:

You can use any text editor, or your favorite Python Integrated Development Environment (IDE), to create the Python source files. If you don't have a preference, we recommend that you check out JetBrains PyCharm, https://www.jetbrains.com/pycharm/, a cross-platform Python IDE that provides code editing, code analysis, a graphical debugger, an integrated unit tester, and version control integration.
  1. The first source file that we will create is storage_service.py. Create this source file in the ObjectDetectionDemo directory. The following is the Python code for storage_service.py:
import boto3

class StorageService:
    def __init__(self):
        self.s3 = boto3.resource('s3')

    def get_all_files(self, storage_location):
        return self.s3.Bucket(storage_location).objects.all()

In this code, please note the following information:

    • storage_service.py contains a Python class, StorageService, that encapsulates the business logic of interacting with Amazon S3.
    • This class implements just one method, get_all_files(), which returns all of the objects stored within a bucket specified by the storage_location parameter.
    • Other functionalities related to Amazon S3 can also be implemented in this file, such as listing the buckets, uploading files to buckets, and so on.
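As a sketch of how such extensions might look, here is a hypothetical variant of StorageService with a bucket-listing method and an upload method added. The extra constructor parameter, the method names, and the stub-injection pattern are our own additions for illustration, not code from this project:

```python
class StorageService:
    def __init__(self, s3_resource=None):
        # Allow a stub resource to be injected so the class can be
        # exercised without AWS credentials; default to the real boto3 resource.
        if s3_resource is None:
            import boto3  # deferred import so tests need not install boto3
            s3_resource = boto3.resource('s3')
        self.s3 = s3_resource

    def get_all_files(self, storage_location):
        return self.s3.Bucket(storage_location).objects.all()

    def list_buckets(self):
        # Return the names of all buckets owned by this account.
        return [bucket.name for bucket in self.s3.buckets.all()]

    def upload_file(self, file_path, storage_location, key):
        # Upload a local file to the given bucket under the given key.
        self.s3.Bucket(storage_location).upload_file(file_path, key)
```

Accepting the resource as a constructor argument is optional, but it makes the business logic testable without touching the network.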
  2. The next source file that we will create is recognition_service.py. Create this source file in the ObjectDetectionDemo directory as well. The following is the Python code for recognition_service.py:
import boto3

class RecognitionService:
    def __init__(self):
        self.client = boto3.client('rekognition')

    def detect_objects(self, storage_location, image_file):
        response = self.client.detect_labels(
            Image = {
                'S3Object': {
                    'Bucket': storage_location,
                    'Name': image_file
                }
            }
        )

        return response['Labels']

In this code, please note the following information:

    • recognition_service.py contains a Python class, RecognitionService, that encapsulates the business logic of interacting with the Amazon Rekognition service.
    • This class implements just one method, detect_objects(), which calls Rekognition's DetectLabels API and then returns the labels from the response.
    • Callers of this method can specify the S3 bucket and the filename with the storage_location and image_file parameters, respectively.
    • Other functionalities related to Amazon Rekognition can also be implemented in this file, such as detecting text, analyzing faces, and so on.
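For example, a text-detection method could be added alongside detect_objects() using Rekognition's DetectText API, which returns its results under the TextDetections key. The sketch below is our own illustration, with an injectable client so it can be exercised without AWS credentials:

```python
class RecognitionService:
    def __init__(self, rekognition_client=None):
        # Allow a stub client to be injected for offline testing;
        # default to the real boto3 Rekognition client.
        if rekognition_client is None:
            import boto3  # deferred import so tests need not install boto3
            rekognition_client = boto3.client('rekognition')
        self.client = rekognition_client

    def detect_text(self, storage_location, image_file):
        # DetectText returns detected words and lines under 'TextDetections'.
        response = self.client.detect_text(
            Image={
                'S3Object': {
                    'Bucket': storage_location,
                    'Name': image_file
                }
            }
        )
        return response['TextDetections']
```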
  3. The final file that we will create is object_detection_demo.py. Create this source file in the ObjectDetectionDemo directory. The following is the Python code for object_detection_demo.py:
from storage_service import StorageService
from recognition_service import RecognitionService

storage_service = StorageService()
recognition_service = RecognitionService()

bucket_name = 'contents.aws.ai'

for file in storage_service.get_all_files(bucket_name):
    if file.key.endswith('.jpg'):
        print('Objects detected in image ' + file.key + ':')
        labels = recognition_service.detect_objects(file.bucket_name, file.key)

        for label in labels:
            print('-- ' + label['Name'] + ': ' + str(label['Confidence']))

In this code, object_detection_demo.py is a Python script that brings together our two service implementations in order to perform object detection on the images that are stored in our S3 bucket.

Here is the interaction diagram that depicts the flow of the demo application:

Please note the following information, all of which is shown in the preceding diagram:

  • This script calls the StorageService to get all of the JPG image files that are stored in the contents.aws.ai bucket (you should replace this with your own bucket).
  • Here, we are hardcoding the bucket name for simplicity, but you can take in the bucket name as a parameter in order to make the script more generic.
  • Then, for each JPG image in the specified bucket, the script calls our RecognitionService to perform object detection, which returns the labels that are found.
  • The script also formats and prints out the labels, along with their confidence scores for the objects that were detected.
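The formatting done in the inner loop can be factored into a small helper, sketched here with a hand-written label list shaped like a Rekognition DetectLabels response. The function name and the sample data are our own, purely for illustration:

```python
def format_labels(image_key, labels):
    # Build the same output the demo script prints: the image name followed
    # by one '-- Name: Confidence' line per detected label.
    lines = ['Objects detected in image ' + image_key + ':']
    for label in labels:
        lines.append('-- ' + label['Name'] + ': ' + str(label['Confidence']))
    return '\n'.join(lines)

# Example with a hand-written label list shaped like a Rekognition response:
sample = [{'Name': 'Dog', 'Confidence': 98.97}, {'Name': 'Pet', 'Confidence': 98.97}]
print(format_labels('dog.jpg', sample))
```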

Note that we are using boto3 in both StorageService and RecognitionService. The boto3 objects manage the sessions between our project code and the AWS services. These sessions are created using the available credentials in the runtime environment. If you are running the script on your local development machine, then the AWS access key pair is taken from the ~/.aws/credentials file. We will cover how the credentials are used in other runtime environments in later chapters.
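For reference, the ~/.aws/credentials file is a simple INI-style file; a minimal example with placeholder values (replace them with your own keys) looks like this:

```ini
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>
```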

For simplicity, we kept the project code relatively short and simple. We will enhance these Python classes in later hands-on projects.
Even though this is only a demo project, it is still good practice to organize the code into different components with separation of concerns. In this project, all of the business logic that interacts with the Amazon S3 service is encapsulated within the StorageService class; the same is done for all the logic that interacts with the Amazon Rekognition service in the RecognitionService class. We will see more benefits of this design practice as our projects get larger and more complex.
  4. Now, let's run the demo script inside the Python virtual environment. First, enter the virtual environment shell:
$ pipenv shell

In this command, please note the following information:

    • This command starts a shell with the Python virtual environment within your normal Terminal shell.
    • Within the virtual environment shell, the Python version and the packages that we specified and installed with pipenv are available to our script.
  5. Within the virtual environment, invoke the object_detection_demo.py script via the following command:
$ python object_detection_demo.py

The output of this command should display the objects that are detected in the images that are stored in the specified S3 bucket:

Objects detected in image animal-beagle-canine-460823.jpg:
-- Pet: 98.9777603149414
-- Hound: 98.9777603149414
-- Canine: 98.9777603149414
-- Animal: 98.9777603149414
-- Dog: 98.9777603149414
-- Mammal: 98.9777603149414
-- Beagle: 98.0347900390625
-- Road: 82.47952270507812
-- Dirt Road: 74.52912902832031
-- Gravel: 74.52912902832031
  6. Remember to exit the virtual environment and return to your normal Terminal shell with the exit command:
$ exit

Congratulations, you just created your first intelligence-enabled application that leverages the power of AI to perform image analysis on the AWS platform! Sit back and think about it: with just a few lines of code, you were able to create a piece of software that can detect and identify countless objects in our world. This is the quick lift you can get by leveraging AWS AI services.
