- OpenStack for Architects
- Ben Silverman, Michael Solberg
Considerations for performance-intensive workloads
The guidelines given earlier work well for development, web, or database workloads. Some workloads, particularly Network Function Virtualization (NFV) workloads, have very specific performance requirements that need to be addressed in the hardware selection process. A few improvements to the scheduling of instances have been added to the Nova compute service in order to enable these workloads.
The first improvement allows a PCI device to be passed through directly from the host hardware into the instance. In standard OpenStack Neutron networking, packets traverse a set of bridges between the instance's virtual interface and the physical network interface. The overhead of this virtual networking is significant, as each packet consumes CPU resources every time it traverses an interface or switch. This overhead can be eliminated by giving the virtual machine direct access to the network device via Single Root I/O Virtualization (SR-IOV). SR-IOV allows a single network adapter to appear as multiple network adapters to the operating system. Each of these virtual network adapters (referred to as virtual functions, or VFs) can be associated directly with a virtual instance.
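As a rough sketch of how this looks from the API side, the following Python snippet uses the openstacksdk library to request a Neutron port bound to a virtual function. The cloud entry (mycloud), network name (provider-net), and port name are placeholders, and the provider network and compute hosts must already be configured for SR-IOV:

```python
import openstack

# Connect using a clouds.yaml entry; "mycloud" is a placeholder name.
conn = openstack.connect(cloud="mycloud")

# Look up the provider network backed by the SR-IOV-capable adapter.
network = conn.network.find_network("provider-net")

# vnic_type "direct" asks Neutron to bind the port to a virtual function (VF)
# on the physical adapter rather than to a software bridge.
port = conn.network.create_port(
    network_id=network.id,
    name="nfv-sriov-port",
    binding_vnic_type="direct",
)
print(port.id)
```

The resulting port is then handed to Nova at boot time, so the instance's interface maps straight onto a VF instead of the virtual switching path.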
The second improvement allows the Nova scheduler to specify the CPU and memory zone for an instance on a compute node that has Non-Uniform Memory Access (NUMA). In a NUMA system, a given region of memory is located closer to a certain set of processor cores. A performance penalty occurs when a process accesses memory pages in a region that is not local to the cores it is running on, and another significant penalty occurs when a process is moved from one memory zone to another. To avoid these penalties, the Nova scheduler has the ability to pin the virtual CPUs of an instance to physical cores on the underlying compute node. It also has the ability to restrict a virtual instance to the memory region associated with those virtual CPUs, effectively constraining the instance to a specified NUMA zone.
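These placement controls are requested through flavor extra specs. The sketch below reuses the placeholder connection from the previous example and assumes a recent openstacksdk that exposes the flavor extra-spec calls shown; the flavor name and sizing are illustrative only, while hw:cpu_policy and hw:numa_nodes are documented Nova flavor properties:

```python
# Create a flavor for pinned workloads; name and sizes are examples only.
flavor = conn.compute.create_flavor(
    name="nfv.pinned", ram=32768, vcpus=16, disk=40
)

# hw:cpu_policy=dedicated pins each virtual CPU to its own physical core, and
# hw:numa_nodes=1 keeps the guest's CPUs and memory within a single NUMA zone.
conn.compute.create_flavor_extra_specs(
    flavor,
    extra_specs={
        "hw:cpu_policy": "dedicated",
        "hw:numa_nodes": "1",
    },
)
```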
The last major performance improvement in the Nova compute service is around memory page allocation. By default, the Linux operating system allocates memory in 4-kilobyte pages on 64-bit Intel systems. While this makes a lot of sense for traditional workloads (it maps to the size of a typical filesystem block), it can have an adverse effect on memory allocation performance in virtual machines. The Linux operating system also allows 2-megabyte and 1-gigabyte memory pages, commonly referred to as huge pages. The Kilo release of OpenStack included support for using huge pages to back virtual instances.
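Huge page backing is requested the same way, through the hw:mem_page_size flavor property, which accepts values such as small, large, 2MB, or 1GB. Continuing the sketch above, and assuming the compute nodes have 1-gigabyte pages reserved at boot:

```python
# Back the guest's memory with 1 GB huge pages; scheduling will fail if the
# target host does not have enough free 1 GB pages reserved.
conn.compute.update_flavor_extra_specs_property(flavor, "hw:mem_page_size", "1GB")
```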
The combination of PCI passthrough, CPU and memory pinning, and huge page support allows dramatic performance improvements for virtual instances in OpenStack, and these features are required for workloads such as NFV. They do have implications for hardware selection that are worth noting, though. Typical NFV instances expect to have an entire NUMA zone dedicated to them, so their flavors are usually very large and tend to be application-specific. They are also hardware-specific: if your flavor specifies that the instance needs 16 virtual CPUs and 32 gigabytes of memory, then the hardware needs to have a NUMA zone with 16 physical cores and 32 gigabytes of memory available. Also, if that NUMA zone has an additional 32 gigabytes of memory configured, that memory will be unavailable to the rest of the system, as the instance has exclusive access to the zone.
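Putting the pieces together, an NFV instance of the size described above might be booted with the pinned, huge-page-backed flavor and the SR-IOV port from the earlier sketches; the image and server names here are placeholders:

```python
image = conn.image.find_image("vnf-image")        # placeholder image name
flavor = conn.compute.find_flavor("nfv.pinned")   # flavor from the earlier sketch
port = conn.network.find_port("nfv-sriov-port")   # SR-IOV port created earlier

# Nova will only place this instance on a host with a NUMA zone that has
# 16 free physical cores, 32 GB of memory, and sufficient huge pages.
server = conn.compute.create_server(
    name="vnf-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"port": port.id}],
)
conn.compute.wait_for_server(server)
```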