
Infrastructure layer – network, nodes, and storage

At the infrastructure layer, the focus is on the operational network, computational nodes, and backend storage.

When defining your infrastructure layer, it is a good idea to consult the Docker-certified infrastructure documentation on Docker's Success Center (https://success.docker.com/). There are specific reference architecture guides available for VMware, AWS, and Azure. These guides provide key insights for operations teams planning and designing their Docker Enterprise infrastructure layer.

The operational network is primarily concerned with ingress, egress, and inter-node data flow, with appropriate isolation and encryption of Docker cluster nodes. Carving out address space and defining a network security group policy are usually the center of attention. While this is generally pretty straightforward, the Docker reference architecture covers important details. A couple of key considerations are worth highlighting:

  • Consideration 1: While there is some conflicting documentation on the topic, it is a good idea to pin your manager nodes to fixed IP addresses.
  • Consideration 2: Make sure Docker's overlay network address space does not overlap with other address space on your network. Starting with Docker 18.09, there are new parameters for initializing Docker Enterprise's underlying Swarm cluster that carve out safe network CIDR blocks for Docker's overlay networking to use:
docker swarm init --default-addr-pool 10.85.0.0/16 --default-addr-pool 10.91.0.0/16 --default-addr-pool-mask-length 25
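
With the example values shown, Swarm hands each new overlay network a /25 subnet carved from the /16 pools; that works out to 2^(25-16) = 512 subnets per pool, each with 126 usable addresses. Choose pools and a mask length that do not collide with routes already in use on your network.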

For a complete introduction to Docker networking, please read the networking reference architecture article on https://success.docker.com/.

Computational nodes are the VMs or bare-metal servers running a Docker Enterprise engine on top of a supported OS. The Docker Enterprise engine is currently supported on CentOS, Oracle Linux, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, Microsoft Windows Server 2016, and Microsoft Windows Server versions 1709 and 1803. The key concerns for setting up a computational node are CPU, RAM, local storage, and cluster storage endpoints.

While there are no magic formulas for predicting perfectly sized nodes, there are some pretty safe bets, as used for planning purposes in the previous cost section. Generally, it is a safe bet to go with four cores and 16 GB of RAM for manager and Docker Trusted Registry (DTR) nodes. Again, worker node sizing will depend on the types of workload you are running on them. Containerized Java applications often need a large memory footprint, so a four-core, 32 GB RAM worker may make sense there. Most organizations have stats on the applications they are currently running, and these can be used for estimation purposes. Do not get too hung up on sizing workers; they are your cattle and as such can be easily replaced.

Another computational node concern is storage. There are three considerations related to storage:

  • Backing filesystems for Docker image storage drivers: When the Docker Engine is installed (in the platform layer), you need an image storage driver to efficiently implement a layered, copy-on-write filesystem. These drivers require a compatible backing filesystem. Most modern Linux systems will automatically use the overlay2 storage driver backed by an ext4 filesystem. Older versions of CentOS and RHEL (7.3 and earlier) generally use the devicemapper storage driver backed by direct-lvm (do not run production workloads with devicemapper in loopback mode; use direct-lvm instead). On SUSE, use the btrfs driver and filesystem. You can verify the selected driver with docker info, as shown after this list.
  • Local storage for node-specific volumes: Docker volumes tied to specific nodes can be handy when you have specialized nodes in your cluster. These nodes are then labeled so that containers can be deployed specifically to them. This ensures that any volumes are consistently available on those nodes, and it comes in handy for container-based centralized build servers to store plugins and workspaces; see the node-labeling sketch after this list. Please remember that these volumes should be added to your backup list!
  • Cluster-based storage: When nodes mount remote storage endpoints using something like NFS, you can then allow containers to mount these mount points as volumes to access remote storage from within a container. This is common for older on-premise deployments, but newer on-premise installations might consider installing an NFS client on the host and using the local volume driver with its NFS option, or they may consider using a third-party volume plugin that ties into your storage vendor's infrastructure for more flexibility and stability. A volume-creation sketch follows the NFS notes below.
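
As a quick sanity check for the storage driver consideration, docker info reports the driver the engine selected and its backing filesystem:

# Report the storage driver the engine selected (expect overlay2 on modern Linux):
docker info --format '{{.Driver}}'
# Show the driver together with its backing filesystem:
docker info | grep -iA1 'storage driver'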
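
For node-specific volumes, the following is a minimal sketch of pinning a containerized build server to a labeled node; the node name, label, and Jenkins image are illustrative assumptions, not a prescribed setup:

# Label the node that should host the build server:
docker node update --label-add build-server=true worker-node-3
# Constrain the service to that node so its local volume is always present there:
docker service create --name build-server \
  --constraint 'node.labels.build-server == true' \
  --mount type=volume,source=jenkins-home,target=/var/jenkins_home \
  jenkins/jenkins:lts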

Please note, NFS storage generally works well for on-premise installations where you have full control over the network, but in the cloud, NFS mounts may be less reliable due to latency from noisy neighbors.

Only use NFS for on-premise implementations with predictably low latency. When running Docker Enterprise in the cloud, consider using something such as CloudStor or RexRay to avoid NFS issues related to sudden spikes in network latency. Such spikes can cause NFS to silently switch to read-only mode, resulting in cascading application failures.
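
For on-premise clusters that do choose the local volume driver's NFS option, a minimal sketch looks like the following; the server address and export path are hypothetical:

# Create a volume backed by an NFS export via the local driver:
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.1.50,rw \
  --opt device=:/exports/app-data \
  app-data
# Containers on this node then mount it like any other named volume:
docker run --rm -v app-data:/data alpine ls /data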

Finally, here are two considerations for computational nodes being prepared to run as manager nodes:

  • These nodes should have fixed IPs. There is some conflicting advice as to whether Docker Enterprise's reconcile processing compensates for manager IP changes. While dynamic IPs are fine for worker nodes, when it comes to manager nodes, play it safe and use fixed IP addresses.
  • They must be backed up regularly, including /var/lib/docker/swarm; a minimal backup sketch follows this list.
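
The following is a minimal backup sketch for a single manager, following the general pattern from Docker's documentation; the backup path is an assumption, and on a live cluster you would take only one manager offline at a time:

# Stop the engine so the Swarm Raft state is consistent on disk:
systemctl stop docker
# Archive the Swarm state (if autolock is enabled, store the unlock key separately):
tar -czf /backup/swarm-$(hostname)-$(date +%F).tar.gz -C /var/lib/docker swarm
# Restart the engine; the manager rejoins the cluster:
systemctl start docker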

More on backups later, in getting ready for production, but remember that your manager nodes are pets, and you might need to restore one from a backup some day!
