
Deploying the PoC application to the Docker Enterprise cluster

Now that you are connected to the Docker cluster through the command-line interface bundle, it is time to deploy your application. Up until now, we've run the containers on our development or build machine to make sure the application works properly. Then, we pushed the images to the DTR so they will be available to the orchestrator when we deploy our stack to the cluster.
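As a quick refresher, the tag-and-push step looked something like the following sketch, where dtr.example.com and app-dev are placeholders for your DTR URL and repository:

# Tag the locally built images with the fully qualified DTR repository path
docker tag db-image:v1 dtr.example.com/app-dev/db-image:v1
docker tag app-image:v1 dtr.example.com/app-dev/app-image:v1

# Log in to the DTR and push both images
docker login dtr.example.com
docker push dtr.example.com/app-dev/db-image:v1
docker push dtr.example.com/app-dev/app-image:v1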

So far, we've created our docker-compose.yml file and run it with the docker-compose up command to get our stack of containers (described as services in the YAML file) running on our single, local Docker host. Now, we want to run that same stack of containers on our Docker Enterprise cluster. Docker makes this easy by letting us use the familiar docker-compose.yml file format to deploy to a Docker Enterprise (or Swarm) cluster with the docker stack deploy command. However, there are a few changes we need to make to the file to get it ready for the cluster. For instance, it is really important that the image names use a fully qualified path to your trusted registry's repository. Otherwise, the cluster nodes will be unable to pull the images and the deployment will fail.
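Here's the difference at a glance (again, the registry URL and repository are placeholders):

# Local-only reference - cluster nodes don't know where to pull this from
image: db-image:v1

# Fully qualified DTR reference - every node can pull it
image: dtr.example.com/app-dev/db-image:v1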

First, let's rename the docker-compose file to something like stack.yml. This will help avoid confusion when we are trying to determine which file to use for a cluster deployment. Since the Docker Swarm command for deployment is docker stack deploy, stack should be a pretty logical choice and one we often see used.
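If you're working in PowerShell, that's a one-liner (use Copy-Item instead if you want to keep the original for local docker-compose runs):

# Rename the compose file for the cluster deployment
Rename-Item .\docker-compose.yml stack.yml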

docker-compose files can be used for Swarm deployments with the docker stack deploy command, but they require version 3.0 or newer (declared at the top of the file). Notice that we are using version 3.3, which allows us to add the deploy section to our stack file.

Looking at the stack.yml file, let's walk through the changes needed to upgrade a single-node docker-compose file into a Swarm cluster stack file. First, notice that the images use fully qualified DTR paths to your PoC registry. Second, notice the addition of the deploy section in the file. Here, we are using DNS Round Robin (DNSRR) endpoint mode because we are running on Windows Server 2016, where VIP (Virtual IP, the preferred Docker built-in load-balancing approach) is not available. Look for VIP support on Windows in Windows Server 2019.

In conjunction with the DNSRR endpoint mode, we must use host mode to publish ports for external traffic. This means the port is only published on the host adapter of the node where the signup-app container is running. For us, that means we will have to point all incoming traffic to our Windows worker node on port 8000. While this may seem a little limiting, it's a fairly common setup for older-style Docker implementations.
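If you're not sure which node that is, the CLI bundle can help you find it. The following is just a sketch: the node name is a placeholder, and in a cloud environment Status.Addr may report an internal address, in which case use the instance's public IP or DNS name instead:

# List the cluster's nodes and note the Windows worker's hostname
docker node ls

# Show the address the Windows worker advertises to the cluster
docker node inspect --format '{{ .Status.Addr }}' <windows-worker-node-name>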

Finally, notice the slight change to our network. Previously, when we were running on a single Windows Server 2016 node, we declared it as an external network. That was primarily due to a limitation of Docker on a single Windows Server 2016 node, where containers can only connect through the preconfigured nat network, so we declared that preexisting network as external in order to use it. Now, however, we are running in Swarm mode, and Swarm provides overlay networking, allowing our containers to communicate across multiple nodes in the cluster. Docker's overlay networking is implemented with VXLAN behind the scenes:

# stack.yml file
version: '3.3'

services:

  signup-db:
    image: {insert-your-DTR-URL-here}/{user-name or org-name}/db-image:v1
    networks:
      - app-neto
    deploy:
      endpoint_mode: dnsrr
      placement:
        constraints:
          - node.platform.os==windows

  signup-app:
    image: {insert-your-DTR-URL-here}/{user-name or org-name}/app-image:v1
    ports:
      - mode: host
        target: 80
        published: 8000
    depends_on:
      - signup-db
    networks:
      - app-neto
    deploy:
      endpoint_mode: dnsrr
      placement:
        constraints:
          - node.platform.os==windows

networks:
  app-neto:
    driver: overlay

For more information on creating docker-compose files, please check out the Docker documentation: https://docs.docker.com/compose/compose-file/.
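Before deploying, it can be worth a quick sanity check of the file. docker-compose can still parse and validate a version 3.x file even though we'll deploy it with docker stack deploy; this assumes docker-compose is installed on the machine you're working from:

# Parse stack.yml and print the resolved configuration; fails on invalid YAML or schema errors
docker-compose -f stack.yml config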

Now, to run the application in the cluster, you need to do the following:

# Deploy the test stack
docker stack deploy -c .\stack.yml test

# Look at the services
docker service ls
ID            NAME             MODE        REPLICAS  IMAGE
paoo6ej3ubcz  test_signup-app  replicated  1/1       ec2-xx-xx-x-xx.us-west-2.compute.amazonaws.com/app-dev/app-image:v1
w32lxzt14khc  test_signup-db   replicated  1/1       ec2-xx-xx-x-xx.us-west-2.compute.amazonaws.com/app-dev/db-image:v1
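Since we're using host-mode publishing, it's also worth confirming which node each task landed on. docker service ps shows the scheduled node and the current task state:

# Show where each service's tasks are running and their current state
docker service ps test_signup-app
docker service ps test_signup-db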

To test the application, point your browser at the Windows worker node's external IP address on port 8000. You might see this:

Figure 18: PoC Application Error Screen

This means signup-app came up before the database was ready. The depends_on parameter in the stack.yml file is honored by docker-compose but ignored by docker stack deploy. Furthermore, even where it is honored, depends_on only waits until the dependency's container has started (its PID 1 process is running), not until the database is fully initialized and ready to accept connections. While that's certainly something we would want to address properly in the pilot phase, we can work around it manually for the PoC. For our workaround, we will restart the application service, and the easiest way to do that is to scale the signup-app service down to 0 replicas and then back up to 1 replica. Here are the commands for doing it:

docker service scale test_signup-app=0
...
docker service scale test_signup-app=1
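If the error page comes back even after scaling the service back up, check whether the database has actually finished initializing before retrying. This assumes your engine version and logging driver support the docker service logs command:

# Tail the database service logs and look for its ready/initialized message
docker service logs --tail 50 test_signup-db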

Now, we can return to our browser and refresh the page. This time, we should see the Dockercon newsletter site.
