
Docker Enterprise operation architecture – infrastructure, platform, and application layers

Framing an operationally oriented architectural perspective helps describe the platform from each constituent user's point of view. The Docker Enterprise platform achieves both efficiency and security through a separation of concerns: application developers are isolated from the infrastructure by platform-level abstractions such as services, networks, volumes, configurations, and secrets. Meanwhile, the actual implementation of these abstractions and their underlying infrastructure is managed by a small group of highly skilled operations staff. This approach allows application development teams, including those performing DevOps activities, to build and deploy applications using platform-agnostic APIs (the Docker and Kubernetes CLIs).

Docker Enterprise's platform separation drives efficiency, security, and innovation. Because a small operations team backs the platform abstractions with infrastructure-appropriate implementations (secure, efficient best practices for on-premises or cloud provider platforms, using Docker plugins), every containerized application team can access those abstractions through a deployment .yaml file. The development team does not need to care where the application is deployed, as long as the abstractions are implemented correctly. This gives application teams powerful tools for mass innovation (Solomon Hykes' dream realized), while a small operations team keeps things secure and running on the underlying infrastructure.
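The abstractions named above all surface in a compose-format stack file. As an illustration only, a minimal sketch of such a file might look like the following; the service, network, and volume names here are invented, not taken from the text:

```shell
# Write a minimal, hypothetical stack file illustrating the platform
# abstractions: services, networks, and volumes.
# Names (web, api, my-2-tier, app-data) are invented for illustration.
cat > ApplicationStack.yml <<'EOF'
version: "3.7"

services:
  web:
    image: nginx:alpine          # front-end service
    networks:
      - my-2-tier
    deploy:
      replicas: 2                # scheduler spreads tasks across workers
  api:
    image: example/api:latest    # hypothetical application image
    networks:
      - my-2-tier
    volumes:
      - app-data:/data           # storage backed by whatever driver ops chose

networks:
  my-2-tier:                     # overlay network connecting the two tiers
    driver: overlay

volumes:
  app-data:                      # implementation is the operations team's concern
EOF
```

The development team writes only this file; how my-2-tier and app-data are realized on the underlying infrastructure is decided by operations.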

Infrastructure skills for AWS, Azure, GCE, and VMware are hard to find! Docker Enterprise's platform separation allows an enterprise to leverage a relatively small team of infrastructure experts across a large number of application teams. Additionally, platform separation enables a DevOps shift left, empowering developers to describe their application's service stack deployment using platform-neutral constructs.

Figure 2 shows the platform separation layers in action. First, the operations team installs and configures the infrastructure following Docker-certified infrastructure guidelines. This includes preparing the host OS, configuring storage (NFS in our example), installing the Docker Enterprise Engine (with its image storage drivers and plugins), installing Docker UCP, and installing DTR. In our example, the Ops team configures the Docker Enterprise Engine for central logging and installs a plugin for NFS storage.
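The engine-level central logging mentioned above is typically configured in the daemon's daemon.json file. A minimal sketch, written to a local file here purely for illustration (on a real host the file lives at /etc/docker/daemon.json, and the syslog collector address below is an assumption):

```shell
# Sketch of an engine config enabling central logging via the syslog
# log driver; written to the working directory for illustration rather
# than /etc/docker/. The syslog address is a hypothetical log collector.
cat > daemon.json <<'EOF'
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.internal:514"
  }
}
EOF
```

After placing this at /etc/docker/daemon.json and restarting the engine, new containers default to the central log driver without any per-container configuration.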

Then, the platform team (which can be an operations function or a specialized group of Docker Enterprise operators trained in platform configuration, operations, support, and maintenance) configures cluster access with RBAC so users can deploy their stacks using the appropriate cluster resources. Finally, a developer or DevOps team member uses a Docker Enterprise CLI bundle to deploy a stack of containers into the cluster, running a docker stack deploy command with the ApplicationStack.yml file. The containers are scheduled across the cluster using the platform abstractions for services, networking, and volumes.
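Assuming a UCP client bundle has already been downloaded and extracted, the deployment step sketched above looks roughly like this; it requires a live cluster, and the bundle directory and stack name are hypothetical:

```shell
# Load the UCP client bundle: this sets DOCKER_HOST and TLS client
# certificates so the local CLI talks to the cluster as this RBAC user.
# (ucp-bundle-dev/ is a hypothetical extraction directory.)
cd ucp-bundle-dev
eval "$(<env.sh)"
cd ..

# Deploy the stack described in ApplicationStack.yml; Swarm schedules
# the services, networks, and volumes across the cluster.
docker stack deploy -c ApplicationStack.yml my-app

# Verify the services and their replica counts.
docker stack services my-app
```

Because the bundle carries the user's certificate, RBAC decisions about which cluster resources the stack may use are enforced server-side, not by anything in the file.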

Normally, this deployment to the cluster is handled by a CI/CD system such as Jenkins, GitLab, or Azure DevOps. The CI system user has its own UCP RBAC account and certificate for accessing the cluster, managing DTR images, and signing the images it builds before pushing them to DTR.
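In such a pipeline, image signing is commonly enabled with Docker Content Trust before pushing; a sketch, where the DTR hostname and repository tag are hypothetical and a registry must actually be reachable for the push to succeed:

```shell
# Enable Docker Content Trust so pushes are signed with the CI user's keys.
export DOCKER_CONTENT_TRUST=1

# Build, tag for the DTR registry, and push; the push is signed.
# (dtr.example.com/ci/app:1.0 is a hypothetical DTR repository tag.)
docker build -t dtr.example.com/ci/app:1.0 .
docker push dtr.example.com/ci/app:1.0
```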

In this case, the application is deployed across two worker nodes as shown below, connected by the My-2-tier network, with access to external data stored on an NFS mount point. Additionally, the ApplicationStack.yml file can describe how the application is externally exposed using layer-7 routing, making the application live immediately. Ultimately, the application can be deployed without any intervention from the infrastructure/operations team:

Figure 2: Service Stacks on Swarm