
OpenStack for Architects
Ben Silverman, Michael Solberg

Sizing the hardware to match the workload

As we mentioned at the start of this chapter, while an iterative approach to developing and deploying software is a best practice, an iterative approach to purchasing hardware has the potential to kill your private cloud initiative. It's crucial to get the hardware right the first time so that you can line up the appropriate discounts from your hardware vendor and set the appropriate budget at the start of the project. Sizing is often difficult for these projects, though, because the requirements of the workload are hard to anticipate ahead of time. This is one of the reasons it's so important to get the application owners involved early in the project: they'll be able to anticipate their application requirements, and that will inform the sizing process.

The first step in the sizing process is to define standard instance sizes, referred to as flavors in Nova parlance. Out of the box, Nova ships with a set of "m1" flavors, which correspond roughly to Amazon EC2 instance types. A flavor has three prominent parameters: the number of virtual CPUs, the amount of memory, and the amount of ephemeral disk storage. Organizations typically define two or three flavors, based on the anticipated workload. Three common flavors are the 1×2, 2×4, and 4×8 sizes, which refer to the number of vCPUs and gigabytes of memory. The ephemeral disk size is typically the same regardless of CPU and memory configuration; it should equate to the expected root disk size of the organization's standard Glance images. For example, a 2×4 built from the Red Hat Enterprise Linux 7 qcow2 image would have two virtual CPUs, 4 gigabytes of memory, and a 20-gigabyte ephemeral disk.
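As a concrete illustration, here is a minimal Python sketch of those three example flavors. The std.* flavor names are hypothetical, RAM is expressed in megabytes and disk in gigabytes (which is how Nova stores flavor attributes), and the printed commands assume the standard python-openstackclient `openstack flavor create` syntax.

```python
# A minimal sketch of the three example flavors described above.
# Flavor names (std.1x2, and so on) are hypothetical.
flavors = [
    # (name,     vCPUs, RAM in MB, root/ephemeral disk in GB)
    ("std.1x2",  1,     2048,      20),
    ("std.2x4",  2,     4096,      20),
    ("std.4x8",  4,     8192,      20),
]

for name, vcpus, ram_mb, disk_gb in flavors:
    # Equivalent python-openstackclient command (assumed syntax):
    print(f"openstack flavor create --vcpus {vcpus} --ram {ram_mb} "
          f"--disk {disk_gb} {name}")
```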

Once the flavors have been defined, the next step is to determine the acceptable overcommit ratio for a given environment. Each of the major hardware vendors publishes virtualization performance benchmark results that can be used as a starting point for this ratio; these benchmarks are available at http://spec.org/ in the virtualization category. Two simple rules emerge from working through these benchmarks: never overcommit memory, and overcommit CPU by up to 10 times. Following these rules allows you to determine the optimal amount of memory in a given piece of compute hardware. For example, a compute node with 36 physical cores can support up to 360 virtual cores of compute. If we use the preceding flavors, we'll see a ratio of 2 gigabytes of RAM to each virtual core. The optimum amount of memory in this compute node would therefore top out somewhere around 720 GB (2 GB × 10 × 36 cores).
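To make that arithmetic explicit, the following Python sketch works through the same numbers; the 10:1 and 1:1 ratios correspond to Nova's cpu_allocation_ratio and ram_allocation_ratio settings, and the 2 GB-per-vCPU figure comes from the flavors defined above.

```python
# Rough memory sizing for a single compute node, following the two
# rules above: overcommit CPU up to 10:1, never overcommit memory.
physical_cores = 36
cpu_allocation_ratio = 10.0   # corresponds to Nova's cpu_allocation_ratio
ram_allocation_ratio = 1.0    # corresponds to Nova's ram_allocation_ratio
ram_per_vcpu_gb = 2           # implied by the 1x2 / 2x4 / 4x8 flavors

max_vcpus = int(physical_cores * cpu_allocation_ratio)
optimal_ram_gb = max_vcpus * ram_per_vcpu_gb * ram_allocation_ratio

print(f"max vCPUs: {max_vcpus}")                # 360
print(f"optimal RAM: {optimal_ram_gb:.0f} GB")  # 720 GB
```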

There's typically a dramatic price difference between tiers of memory, and it often makes sense to configure the system with less than the optimum amount. Let's assume that it is only economical to configure our 36-core compute node with 512 GB of memory. The next two items to consider are network bandwidth and available ephemeral storage, and this is where it helps to understand the target workload. 512 GB of memory would support 256 1×2 instances, 128 2×4 instances, or 64 4×8 instances. That gives us a maximum ephemeral disk requirement of between 1.2 and 5 TB, assuming a 20 GB Glance image, which is a pretty large discrepancy. If we feel confident that the bulk of our instances will be 2×4, we can size for around 128 instances, which gives us a requirement of 2.5 TB of disk space for ephemeral storage. There's some leeway there as well: ephemeral storage is thin-provisioned with the KVM hypervisor, and it's unlikely that we'll consume the full capacity.

However, if we use persistent storage options such as Ceph, SAN, iSCSI, or NFS for image, instance, or object storage, capacity planning becomes more complicated. In addition to determining flavor CPU, memory, and ephemeral disk sizing, you will need to account for persistent disk space for root volumes. If these volumes are stored on the same appliances or clusters as your Glance images and object storage, great care must be taken to ensure that the shared storage can grow consistently with all of these demands.
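The sketch below is a rough capacity model under the assumptions above (512 GB of RAM, a 20 GB root disk per instance, and memory rather than CPU as the binding constraint); it reproduces the instance counts and ephemeral disk figures quoted in the text.

```python
# Rough capacity model for a 36-core compute node configured with
# 512 GB of RAM, assuming memory (not CPU) is the binding constraint.
ram_gb = 512
root_disk_gb = 20                      # standard Glance image root disk

flavor_ram_gb = {"1x2": 2, "2x4": 4, "4x8": 8}

for name, per_instance_ram in flavor_ram_gb.items():
    instances = ram_gb // per_instance_ram
    ephemeral_tb = instances * root_disk_gb / 1024   # 1 TB = 1024 GB here
    print(f"{name}: {instances:>3} instances, "
          f"~{ephemeral_tb:.1f} TB ephemeral disk")

# Sizing for ~128 2x4 instances therefore calls for roughly 2.5 TB of
# local, thin-provisioned ephemeral storage.
```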

The last major item to consider when selecting compute hardware is the available bandwidth on the compute node. Most of the systems we work with today have two bonded 10-gigabit interfaces dedicated to instance traffic. Dividing the available bandwidth by the number of anticipated instances tells us whether an additional set of interfaces is required. Dividing 10 gigabits by 128 instances gives us roughly 78 megabits of average available bandwidth per instance. This is more than sufficient for most web and database workloads.
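As a quick check of that figure, the sketch below divides a single 10-gigabit link (conservatively ignoring the second member of the bond) across 128 instances.

```python
# Average bandwidth per instance on a 10 Gb link shared by ~128
# instances (counting only one member of the bond, to be conservative).
link_gbit = 10
instances = 128

per_instance_mbit = link_gbit * 1000 / instances
print(f"~{per_instance_mbit:.0f} Mbit/s average per instance")  # ~78 Mbit/s
```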
