OpenStack for Architects, by Ben Silverman and Michael Solberg
Physical network design
No discussion of OpenStack networking would be complete without mention of the spine-leaf physical network architecture. Spine-leaf is an alternative to the traditional multitier network architecture, which is made up of core, aggregation, and access layers. Spine-leaf modifies the design of the aggregation layer to reduce the number of hops between servers that are attached to different access switches. The aggregation layer becomes a spine in which every access switch (the leaf) can reach every other access switch through a single spine switch. This architecture is often considered a prerequisite for horizontally scalable cloud workloads, which are focused more on traffic between instances (east-west traffic) than on traffic between instances and the internet (north-south traffic).
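To make the hop-count argument concrete, the following is a minimal, self-contained Python sketch (not from the book; the switch names and fabric size are invented for illustration). It models a spine-leaf fabric as a full mesh between the two layers and shows that servers on any two different leaves are always exactly two switch hops apart, via whichever spine both leaves share.

```python
# Minimal sketch: every leaf connects to every spine, so east-west
# traffic between servers on different leaves is always leaf -> spine -> leaf.
SPINES = ["spine-1", "spine-2"]
LEAVES = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]

# Full mesh between the spine layer and the leaf layer.
links = {(leaf, spine) for leaf in LEAVES for spine in SPINES}

def east_west_hops(src_leaf: str, dst_leaf: str) -> int:
    """Switch hops between servers attached to the given leaves."""
    if src_leaf == dst_leaf:
        return 1  # same top-of-rack switch
    # Any spine connected to both leaves provides a two-hop path.
    assert any((src_leaf, s) in links and (dst_leaf, s) in links for s in SPINES)
    return 2

print(east_west_hops("leaf-1", "leaf-4"))  # -> 2, regardless of fabric size
```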
The primary impact that spine-leaf network design has on OpenStack deployments is that layer 2 networks are typically terminated at the spine, meaning that subnets cannot stretch from leaf to leaf. This has a couple of implications. First, virtual IPs cannot migrate from leaf to leaf, and thus the external network is constrained to a single leaf. If the leaf boundary is the top-of-rack switch, this places all the load balancers for a given control plane within a single failure zone (the rack). Second, provider networks need to be physically attached to each compute node within an OpenStack region if instances are going to be directly attached to them. This limitation can constrain an OpenStack region to the size of a leaf. Once again, if the leaf boundary is the top-of-rack switch, this makes for very small regions, which lead to an unusually high ratio of control nodes to compute nodes.
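The provider-network constraint is easiest to see in how such a network is defined. The following is a hedged sketch using openstacksdk; the cloud entry, the "physnet1" label, VLAN ID 201, and the subnet range are assumptions for illustration, not values from the text. Whatever physical segment "physnet1" maps to must be cabled and trunked to every compute node that will host instances attached directly to this network, which is exactly the leaf-boundary problem described above.

```python
# Hedged sketch: creating a VLAN provider network with openstacksdk.
# The cloud name, physnet label, VLAN ID, and subnet range are assumed.
import openstack

conn = openstack.connect(cloud="mycloud")  # clouds.yaml entry (assumed)

# The physical segment "physnet1" must reach every compute node that
# will attach instances to this network, so it cannot terminate at a
# single leaf if the region spans several leaves.
provider_net = conn.network.create_network(
    name="provider-vlan-201",
    provider_network_type="vlan",
    provider_physical_network="physnet1",
    provider_segmentation_id=201,
)
conn.network.create_subnet(
    network_id=provider_net.id,
    name="provider-vlan-201-subnet",
    ip_version=4,
    cidr="192.0.2.0/24",      # documentation range, placeholder
    gateway_ip="192.0.2.1",
)
```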
We've seen a couple of different approaches to implementing spine-leaf within OpenStack installations given these limitations. The first is to simply stretch L2 networks across the leaves in a given deployment. The only networks that require stretching are the external API network and the provider networks. If instances are not going to be directly attached to the provider networks (that is, if floating IPs are used for external connectivity), then these networks only need to be stretched across a failure zone to ensure that the loss of a single rack doesn't bring down the control plane. Deployments that choose to stretch L2 across racks typically group racks into pods of three or more racks, which then become the leaf boundary. The second approach that we've seen used is to create tunnels within the spine, which simulate stretched L2 subnets across the top-of-rack switches. Either way, collaboration between the network architecture team and the cloud architecture team should lead to a solution that is supportable by the organization.
The concept of availability zones was introduced to Neutron in the Mitaka release of OpenStack (https://blueprints.launchpad.net/neutron/+spec/add-availability-zone). Availability zones allow the OpenStack administrator to expose leaf boundaries to the Nova scheduler, so that Nova can decide where to place workloads based on the network topology. Although not yet in wide use, this feature will provide much more flexibility for OpenStack architects when deploying with a spine-leaf network architecture.
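As a concrete illustration, here is a minimal openstacksdk sketch, assuming Neutron availability zones that the operator has named after racks; "az-rack-1" and the cloud entry are assumptions, not names from the text. It lists the availability zones Neutron reports and then creates a tenant network hinted to a particular leaf.

```python
# Hedged sketch: working with Neutron availability zones via openstacksdk.
# The cloud name and the zone name "az-rack-1" are assumed for illustration.
import openstack

conn = openstack.connect(cloud="mycloud")  # clouds.yaml entry (assumed)

# Availability zones that Neutron exposes for network resources.
for az in conn.network.availability_zones():
    print(az.name, az.resource, az.state)

# Hint that this network's resources should live on a particular leaf.
leaf_net = conn.network.create_network(
    name="leaf1-tenant-net",
    availability_zone_hints=["az-rack-1"],
)
print(leaf_net.availability_zone_hints)
```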