- Mastering Docker Enterprise
- Mark Panthofer
- 530 words
- 2021-07-02 12:30:06
Docker Enterprise operational architecture – infrastructure, platform, and application layers
Framing an operationally oriented architectural perspective helps describe the platform from each constituent user's point of view. The Docker Enterprise platform achieves both efficiency and security through a separation of concerns, isolating application developers from the infrastructure with platform-level abstractions such as services, networks, volumes, configurations, and secrets. Meanwhile, the actual implementation of these platform abstractions and their underlying infrastructure is managed by a small group of highly skilled operations engineers. This approach allows application development teams, including DevOps functions, to build and deploy applications using platform-agnostic APIs (the Docker and Kubernetes CLIs).
Docker Enterprise's platform separation drives efficiency, security, and innovation! By having a small operations team back the Docker Enterprise platform abstractions with infrastructure-appropriate implementations (secure and efficient best practices for on-premises or cloud provider platforms using Docker plugins), every containerized application team can access these abstractions through a deployment .yaml file. The development team does not care where the application is deployed, as long as the abstractions are implemented correctly. This gives application teams powerful tools for mass innovation (Solomon Hykes' dream realized), while a small operations team keeps things secure and running on the underlying infrastructure.
Figure 2 shows the platform separation layers in action. First, the operations team installs and configures the infrastructure using Docker-certified infrastructure guidelines. This includes preparing the host OS, configuring (NFS) storage, installing the Docker Enterprise Engine (with image storage drivers and plugins), installing Docker UCP, and installing DTR. In our example, the Ops team configures the Docker Enterprise Engine for central logging and installs the plugin for NFS storage.
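To make the engine-configuration step concrete, here is a minimal sketch of a central-logging setup in the engine's /etc/docker/daemon.json. The syslog collector address (logs.example.com) is a hypothetical placeholder; the log driver and address would match whatever logging infrastructure the operations team runs:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.com:514",
    "tag": "{{.Name}}"
  }
}
```

Because this is set at the engine level, every container scheduled onto the node inherits the logging configuration without the development team doing anything in their stack files.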
Then, the platform team (which can be an operations function or a specialized group of Docker Enterprise operators trained in platform configuration, operations, support, and maintenance) configures cluster access with RBAC so users can deploy their stacks using the appropriate cluster resources. Finally, a developer/DevOps team member uses a Docker Enterprise CLI bundle to deploy a stack of containers into the cluster with a docker stack deploy command and the ApplicationStack.yml file. The containers are scheduled across the cluster using the platform abstractions for services, networking, and volumes.
In this case, the application is deployed across two worker nodes as shown below, connected by the My-2-tier network, with access to external data stored on an NFS mount point. Additionally, the ApplicationStack.yml file can describe how the application is externally exposed using layer-7 routing, making the application immediately live. Ultimately, the application can be deployed without any intervention from the infrastructure/operations team:
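A stack file realizing this example might look like the following sketch. The image names, hostname, and NFS server address are hypothetical; the My-2-tier overlay network, the NFS-backed volume, and the UCP Interlock layer-7 routing labels correspond to the platform abstractions described above:

```yaml
version: "3.7"

services:
  web:
    image: example/web-frontend:1.0   # hypothetical image name
    networks:
      - My-2-tier
    deploy:
      replicas: 2
      labels:
        # Interlock layer-7 routing: expose this service at a hostname
        com.docker.lb.hosts: app.example.com
        com.docker.lb.port: "8080"
        com.docker.lb.network: My-2-tier
  db:
    image: example/web-db:1.0         # hypothetical image name
    networks:
      - My-2-tier
    volumes:
      - app-data:/var/lib/data

networks:
  My-2-tier:
    driver: overlay

volumes:
  app-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,rw"    # hypothetical NFS server
      device: ":/exports/app-data"
```

A team member would deploy it from a UCP client bundle session with `docker stack deploy -c ApplicationStack.yml my-app`. Note that the stack file references only abstractions (services, a network, a volume, routing labels); how the NFS volume and layer-7 routing are actually implemented remains the platform team's concern.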
