- Hands-On Artificial Intelligence on Amazon Web Services
- Subhashini Tripuraneni Charles Song
Developing a demo application web user interface
Next, let's create a simple web user interface with HTML and JavaScript in the index.html and scripts.js files in the website directory.
Refer to the code in the index.html file, as follows:
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8"/>
    <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
    <title>Object Detector</title>
    <link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3.css"/>
    <link rel="stylesheet" href="https://www.w3schools.com/lib/w3-theme-blue-grey.css"/>
</head>
<body class="w3-theme-l4" onload="runDemo()">
    <div style="min-width:400px">
        <div class="w3-bar w3-large w3-theme-d4">
            <span class="w3-bar-item">Object Detector</span>
        </div>
        <div class="w3-container w3-content">
            <p class="w3-opacity"><b>Randomly Selected Demo Image</b></p>
            <div class="w3-panel w3-white w3-card w3-display-container"
                 style="overflow: hidden">
                <div style="float: left;">
                    <img id="image" width="600"/>
                </div>
                <div id="objects" style="float: right;">
                    <h5>Objects Detected:</h5>
                </div>
            </div>
        </div>
    </div>
    <script src="scripts.js"></script>
</body>
</html>
We are using standard HTML tags here, so the code of the web page should be easy to follow for anyone familiar with HTML. A few things worth pointing out are as follows:
- We are including a couple of Cascading Style Sheets (CSS) files from www.w3schools.com to make our web interface a bit prettier than plain HTML. Most of the classes in the HTML tags are defined in these style sheets.
- The <img> tag with the image ID will be used to display the randomly selected demo image. This ID will be used by JavaScript to add the image dynamically.
- The <div> tag with the objects ID will be used to display the objects that were detected in the demo image. This ID will also be used by JavaScript to add the object labels and confidence levels dynamically.
- The scripts.js file is included toward the bottom of the HTML file. This adds the dynamic behaviors that were implemented in JavaScript to this HTML page.
- The runDemo() function from scripts.js is run when the HTML page is loaded in a browser. This is accomplished in the index.html page's <body> tag with the onload attribute.
Please refer to the code of the scripts.js file, as follows:
"use strict";
const serverUrl = "http://127.0.0.1:8000";
function runDemo() {
fetch(serverUrl + "/demo-object-detection", {
method: "GET"
}).then(response => {
if (!response.ok) {
throw response;
}
return response.json();
}).then(data => {
let imageElem = document.getElementById("image");
imageElem.src = data.imageUrl;
imageElem.alt = data.imageName;
let objectsElem = document.getElementById("objects");
let objects = data.objects;
for (let i = 0; i < objects.length; i++) {
let labelElem = document.createElement("h6");
labelElem.appendChild(document.createTextNode(
objects[i].label + ": " + objects[i].confidence + "%")
);
objectsElem.appendChild(document.createElement("hr"));
objectsElem.appendChild(labelElem);
}
}).catch(error => {
alert("Error: " + error);
});
}
Let's talk about the preceding code in detail:
- The script has only one function, runDemo(). This function makes an HTTP GET request to the /demo-object-detection endpoint running on the local HTTP server via the Fetch API that's available in JavaScript.
- If the response from the local endpoint is OK (that is, response.ok is true), the function converts the payload into a JSON object and passes it down to the next processing block; otherwise, the response is thrown as an error and handled by the catch block at the end of the chain.
- The runDemo() function then looks for the HTML element with the image ID, which is the <img> tag, and sets its src attribute to the imageUrl returned by the endpoint. Remember, this imageUrl is the URL of the image file stored in S3. The <img> tag's alt attribute is set to imageName, which will be displayed to the user if the image cannot be loaded for some reason.
- Note that the image in S3 must be set to public readable in order for the website to display it. If you only see the alt text, double-check that the image is readable by the public.
- The runDemo() function then looks for an HTML element with the objects ID, which is a <div> tag, and appends a <h6> heading element for each object returned by the local endpoint, including each object's label and detection confidence level.
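To make the data flow concrete, here is a sketch of the JSON shape that runDemo() expects the /demo-object-detection endpoint to return. The field names (imageUrl, imageName, objects, label, confidence) come straight from the code above, but the values shown are purely illustrative, not real endpoint output:

```javascript
// Illustrative payload only — the shape matches what runDemo() reads,
// but the URL, image name, labels, and confidence values are hypothetical.
const sampleResponse = {
    imageUrl: "https://my-demo-bucket.s3.amazonaws.com/demo.jpg",
    imageName: "demo.jpg",
    objects: [
        { label: "Dog", confidence: 98.5 },
        { label: "Pet", confidence: 98.5 }
    ]
};

// The same string construction used in runDemo()'s for loop:
const labelText = sampleResponse.objects[0].label + ": " +
    sampleResponse.objects[0].confidence + "%";
console.log(labelText); // Dog: 98.5%
```

Each entry in the objects array becomes one `<h6>` heading under the Objects Detected panel, so the label text printed above is exactly what the user sees for the first detection.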
Now, we are ready to see this website in action. To run the website locally, simply open the index.html file in your browser. You should see a web page similar to the following screenshot:

Upload a few JPEG image files to your S3 bucket and refresh the page a few times to see the object detection demo run; the demo will select a different image that's stored in your S3 bucket each time it runs. The ObjectDetector application is not as fancy as the Amazon Rekognition demo, but pat yourself on the back for creating a well-architected AI application!
The local HTTP server will run continuously unless you explicitly stop it. To stop the local HTTP server, go to the Terminal window that's running chalice local and press Ctrl + C.
The final project structure for the ObjectDetector application should look as follows:
Project Organization
------------
├── ObjectDetector/
    ├── Capabilities/
        ├── .chalice/
            ├── config.json
        ├── chalicelib/
            ├── __init__.py
            ├── recognition_service.py
            ├── storage_service.py
        ├── app.py
        ├── requirements.txt
    ├── Website/
        ├── index.html
        ├── scripts.js
    ├── Pipfile
    ├── Pipfile.lock
It's now time to make our AI application public and deploy it to the AWS cloud.