- Mastering Docker Enterprise
- Mark Panthofer
Infrastructure layer – network, nodes, and storage
At the infrastructure layer, the focus is on the operational network, computational nodes, and backend storage.
The operational network is primarily concerned with ingress, egress, and inter-node data flow, with appropriate isolation and encryption between Docker cluster nodes. Carving out address space and defining a network security group policy are usually the center of attention. While this is generally pretty straightforward, the Docker reference architecture covers important details. A couple of key considerations are worth highlighting:
- Consideration 1: While there is some conflicting documentation on the topic, it is a good idea to pin your manager nodes to fixed IP addresses.
- Consideration 2: Make sure Docker's overlay network address space does not overlap with other address space on your network. Starting in Docker 18.09, there are new parameters for initializing Docker Enterprise's underlying Swarm cluster that carve out safe network CIDR blocks for Docker's overlay networking to use:
docker swarm init --default-addr-pool 10.85.0.0/16 --default-addr-pool 10.91.0.0/16 --default-addr-pool-mask-length 25
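To sanity-check the pools after initialization, a minimal sketch (the demo-net network name is just an example) is to create an overlay network and inspect which subnet Swarm allocated to it; the result should fall inside one of the configured pools, as a /25 given the mask length above:
docker network create --driver overlay demo-net
docker network inspect demo-net --format '{{(index .IPAM.Config 0).Subnet}}'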
For a complete introduction to Docker networking, please read the https://success.docker.com/ article on networking.
Computational nodes are the VMs or bare-metal nodes running a Docker Enterprise engine on top of a supported OS. The Docker Enterprise engine is currently supported on CentOS, Oracle Linux, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, Microsoft Windows Server 2016, Microsoft Windows Server 1709, and Microsoft Windows Server 1803. The key concerns for setting up a computational node are CPU, RAM, local storage, and cluster storage endpoints.
While there are no magic formulas to predict perfectly sized nodes, there are some pretty safe bets, as used for planning purposes in the previous cost section. Generally, it is a safe bet to go with four cores and 16 GB of RAM for managers and Docker Trusted Registry (DTR) nodes. Again, worker nodes will depend on the types of workload you are running on them. Containerized Java applications often need a large memory footprint, so a four-core, 32 GB RAM worker may make sense. Most organizations have stats on the applications they are currently running, which can be used for estimation purposes. Do not get too hung up on sizing workers; they are your cattle and as such can be easily replaced.
Another computational node consideration is storage, which breaks down into three areas:
- Backing filesystems for Docker image storage drivers: When the Docker Engine is installed (in the platform layer), you need an image storage driver to efficiently implement a layered, copy-on-write filesystem. These drivers require a compatible backing filesystem. Most modern Linux systems will automatically use the overlay2 storage driver backed by an ext4 filesystem. Older versions of CentOS and RHEL (7.3 and earlier) generally use the devicemapper storage driver backed by direct-lvm (do not run production workloads with devicemapper in loopback mode). On SUSE, use the btrfs driver and filesystem. A sketch for checking and pinning the driver follows this list.
- Local storage for node-specific volumes: Docker volumes tied to specific nodes can be handy when you have specialized nodes in your cluster. These nodes are labeled so that containers can be deployed specifically to them, which ensures that any volumes are consistently available on those nodes; this comes in handy for container-based centralized build servers that store plugins and workspaces (see the labeling sketch after this list). Please remember that these volumes should be added to your backup list!
- Cluster-based storage: When nodes mount remote storage endpoints using something like NFS, you can then allow containers to mount these mount points as volumes to access remote storage from within a container. This is common for older on-premise deployments, but newer on-premise installations might consider installing an NFS client on the host and using the local volume driver's nfs type (see the sketch after this list), or they may consider using a third-party volume plugin that ties into your storage vendor's infrastructure for more flexibility and stability.
Please note, NFS storage generally works well for on-premise installations where you have full control over the network, but in the cloud, NFS mounts may be less reliable due to latency from noisy neighbors.
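Taking these in order: for the first consideration, a minimal sketch of confirming which storage driver the engine selected, and of pinning it explicitly in /etc/docker/daemon.json (pinning requires an engine restart; overlay2 here is just the common Linux case):
docker info --format '{{.Driver}}'
A daemon.json that pins the driver would contain:
{
  "storage-driver": "overlay2"
}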
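For the second consideration, a hedged sketch of pinning a containerized build server to a labeled node (the node-3 hostname, build-server label, and Jenkins image are illustrative only):
docker node update --label-add build-server=true node-3
docker service create --name jenkins \
  --constraint 'node.labels.build-server == true' \
  --mount type=volume,source=jenkins-home,target=/var/jenkins_home \
  jenkins/jenkins:lts
The jenkins-home volume lives only on the labeled node, which is exactly why such volumes belong on your backup list.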
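For the third consideration, a sketch of the local volume driver's nfs type (the server address 10.0.0.10 and export path are placeholders for your environment):
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.10,rw \
  --opt device=:/exports/app-data \
  app-data
Any container that mounts the app-data volume then reads and writes through the NFS export.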
Finally, here are two considerations for computational nodes being prepared to run as manager nodes:
- These nodes should have fixed IPs. There is some conflicting advice as to whether Docker Enterprise's reconcile processing compensates for manager IP changes. While dynamic IPs are fine for worker nodes, when it comes to manager nodes, play it safe and use fixed IP addresses.
- They must be backed up regularly, including /var/lib/docker/swarm.
More about backups later, in getting ready for production, but remember that your manager nodes are pets and you might need to restore one from backups some day!
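In the meantime, here is a minimal backup sketch, assuming a systemd-managed engine and an existing /backup directory; run it on a non-leader manager so the leader is not disturbed, since the engine must be stopped while /var/lib/docker/swarm is copied:
systemctl stop docker
tar -czf /backup/swarm-$(hostname)-$(date +%F).tar.gz /var/lib/docker/swarm
systemctl start docker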