- Mastering Docker Enterprise
- Mark Panthofer
- 530 words
- 2021-07-02 12:30:06
Docker Enterprise operation architecture – infrastructure, platform, and application layers
An operationally oriented architectural perspective helps describe the platform from each constituent user's point of view. The Docker Enterprise platform achieves both efficiency and security through a separation of concerns: application developers are isolated from the infrastructure by platform-level abstractions such as services, networks, volumes, configurations, and secrets. Meanwhile, the actual implementation of these abstractions and the underlying infrastructure is managed by a small group of highly skilled operations engineers. This approach allows application development teams, including those performing DevOps activities, to build and deploy applications using platform-agnostic APIs (the Docker and Kubernetes CLIs).
Docker Enterprise's platform separation drives efficiency, security, and innovation. A small operations team backs the platform abstractions with infrastructure-appropriate implementations (secure, efficient best practices for on-premises or cloud provider platforms using Docker plugins), so every containerized application team can access these abstractions through a deployment .yaml file. The development team does not need to care where the application is deployed, as long as the abstractions are implemented correctly. This gives application teams powerful tools for mass innovation (Solomon Hykes' dream realized), while a small operations team keeps things secure and running on the underlying infrastructure.
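A deployment .yaml file of this kind might look like the following minimal sketch. All image, service, network, volume, and secret names here are illustrative assumptions, not taken from the book:

```yaml
version: "3.7"

services:
  web:
    image: my-org/web:1.0          # illustrative application image
    networks:
      - my-2-tier
    secrets:
      - db_password
    deploy:
      replicas: 2                  # the scheduler spreads tasks across workers

  db:
    image: postgres:11
    networks:
      - my-2-tier
    volumes:
      - app-data:/var/lib/postgresql/data
    secrets:
      - db_password

networks:
  my-2-tier:
    driver: overlay                # platform-provided networking abstraction

volumes:
  app-data:
    driver: local                  # ops may back this with an NFS-aware driver

secrets:
  db_password:
    external: true                 # provisioned ahead of time by the platform team
```

Note that the file references only the platform abstractions; nothing in it says whether the cluster runs on-premises or in a cloud.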
Figure 2 shows the platform separation layers in action. First, the operations team installs and configures the infrastructure following Docker-certified infrastructure guidelines. This includes preparing the host OS, configuring storage (NFS in our example), installing the Docker Enterprise Engine (with image storage drivers and plugins), installing Docker Universal Control Plane (UCP), and installing the Docker Trusted Registry (DTR). In our example, the Ops team configures the Docker Enterprise Engine for central logging and installs a plugin for NFS storage.
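The Ops-side setup described above might be sketched as follows. The log driver choice, NFS server address, and export path are placeholder assumptions for illustration:

```shell
# Central logging: point the engine's default log driver at a syslog collector
# (written to /etc/docker/daemon.json on each node; address is a placeholder)
cat <<'EOF' > /etc/docker/daemon.json
{
  "log-driver": "syslog",
  "log-opts": { "syslog-address": "udp://logs.internal.example:514" }
}
EOF
systemctl restart docker

# NFS-backed storage: create a volume using the local driver's NFS options
# (server address and export path are placeholders)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=nfs.internal.example,rw \
  --opt device=:/exports/app-data \
  app-data
```

Because this provisioning happens once at the platform layer, application teams simply reference the resulting volume and never see the NFS details.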
Then, the platform team (this can be an operations function or a specialized group of Docker Enterprise operators trained in platform configuration, operations, support, and maintenance) configures cluster access with RBAC so users can deploy their stacks using the appropriate cluster resources. Finally, a developer or DevOps team member uses a Docker Enterprise client bundle to deploy a stack of containers into the cluster, running a docker stack deploy command against the ApplicationStack.yml file. The containers are scheduled across the cluster using the platform abstractions for services, networking, and volumes.
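From the developer's side, that final step might look like the following sketch. The bundle filename and stack name are assumptions; the client bundle itself is downloaded from the UCP web UI for the user's account:

```shell
# Unpack the UCP client bundle and point the local Docker CLI at the cluster
unzip ucp-bundle-jane.zip -d ucp-bundle && cd ucp-bundle
eval "$(<env.sh)"        # sets DOCKER_HOST and the TLS certs from the bundle

# Deploy the stack described in ApplicationStack.yml
docker stack deploy -c ApplicationStack.yml my-app

# Check which nodes the scheduler placed the tasks on
docker stack services my-app
docker service ps my-app_web
```

The RBAC grants configured by the platform team determine which collections or namespaces this bundle can deploy into.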
In this case, the application is deployed across two worker nodes as shown below, connected by the My-2-tier network and with access to external data stored on an NFS mount point. Additionally, the ApplicationStack.yml file can describe how the application is exposed externally using layer-7 routing, making the application immediately live. Ultimately, the application can be deployed completely without any intervention from the infrastructure/operations team.
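In UCP, layer-7 routing is driven by service labels in the same deployment file. A hedged fragment, with the hostname and port as placeholder assumptions:

```yaml
services:
  web:
    image: my-org/web:1.0                        # illustrative image
    networks:
      - my-2-tier
    deploy:
      labels:
        com.docker.lb.hosts: app.example.com     # external hostname (placeholder)
        com.docker.lb.network: my-2-tier         # network the proxy attaches to
        com.docker.lb.port: 8080                 # container port to route to

networks:
  my-2-tier:
    driver: overlay
```

Once the stack is deployed, UCP's routing layer begins forwarding requests for the named host to the service, so the application goes live without an ops ticket.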
