
Physical network design

No discussion of OpenStack networking would be complete without mention of the spine-leaf physical network architecture. Spine-leaf is an alternative to the traditional multitier network architecture, which is made up of core, aggregation, and access layers. Spine-leaf modifies the design of the aggregation layer to reduce the number of hops between servers attached to different access switches. The aggregation layer becomes a spine in which every access switch (the leaf) can reach every other access switch through a single spine switch. This architecture is often considered a prerequisite for horizontally scalable cloud workloads, which are focused more on traffic between instances (east-west traffic) than on traffic between instances and the internet (north-south traffic).
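To make the hop-count argument concrete, the following minimal sketch models both topologies as adjacency maps and counts switch-to-switch hops with a breadth-first search. The switch names and the two-switch-per-tier sizing are invented purely for illustration.

from collections import deque

def hops(graph, src, dst):
    """Return the number of switch-to-switch hops between src and dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

# Traditional three-tier: access switches under different aggregation
# switches can only reach each other through the core.
three_tier = {
    "access1": {"agg1"}, "access2": {"agg2"},
    "agg1": {"access1", "core"}, "agg2": {"access2", "core"},
    "core": {"agg1", "agg2"},
}

# Spine-leaf: every leaf connects to every spine, so any two leaves are
# always exactly two hops apart (leaf -> spine -> leaf).
spine_leaf = {
    "leaf1": {"spine1", "spine2"}, "leaf2": {"spine1", "spine2"},
    "spine1": {"leaf1", "leaf2"}, "spine2": {"leaf1", "leaf2"},
}

print(hops(three_tier, "access1", "access2"))  # 4
print(hops(spine_leaf, "leaf1", "leaf2"))      # 2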

The primary impact that spine-leaf network design has on OpenStack deployments is that layer 2 networks are typically terminated at the spine, meaning that subnets cannot stretch from leaf to leaf. This has a couple of implications. First, virtual IPs cannot migrate from leaf to leaf, and thus the external network is constrained to a single leaf. If the leaf boundary is the top-of-rack switch, this places all the load balancers for a given control plane within a single failure zone (the rack). Second, provider networks need to be physically attached to each compute node within an OpenStack region if instances are going to be directly attached to them. This limitation can constrain an OpenStack region to the size of a leaf. Once again, if the leaf boundary is the top-of-rack switch, this makes for very small regions, which leads to an unusually high ratio of control nodes to compute nodes.
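The provider network constraint is easiest to see at the API level. The following sketch uses openstacksdk to create a VLAN-backed provider network; the cloud name (mycloud), physical network label (physnet1), VLAN ID, and CIDR are all assumptions for illustration. Every compute node that hosts directly attached instances must have a physical path to that VLAN, which is what ties the network's reach to a single leaf.

import openstack

# Assumes a 'mycloud' entry in clouds.yaml with admin credentials.
conn = openstack.connect(cloud="mycloud")

# A provider network maps directly onto a VLAN of a named physical
# network, so its reach is limited by the physical switch fabric.
provider_net = conn.network.create_network(
    name="provider-vlan100",
    provider_network_type="vlan",
    provider_physical_network="physnet1",
    provider_segmentation_id=100,
)

conn.network.create_subnet(
    network_id=provider_net.id,
    name="provider-vlan100-subnet",
    ip_version=4,
    cidr="203.0.113.0/24",  # documentation range used as a placeholder
)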

We've seen a couple of different approaches to implementing spine-leaf in OpenStack installations given these limitations. The first is to simply stretch L2 networks across the leaves in a given deployment. The only networks that require stretching are the external API network and the provider networks. If instances are not going to be directly attached to the provider networks (that is, if floating IPs are used for external connectivity), then these networks only need to be stretched across a failure zone to ensure that the loss of a single rack doesn't bring down the control plane. Deployments that choose to stretch L2 across racks typically group racks into pods of three or more racks, which then become the leaf boundary. The second approach that we've seen used is to create tunnels within the spine, which simulate stretched L2 subnets across the top-of-rack switches. Either way, collaboration between the network architecture team and the cloud architecture team should lead to a solution that is supportable by the organization.
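The tunneling approach is normally implemented in the switch fabric itself (for example, with EVPN/VXLAN), so the exact configuration is vendor-specific. As a purely conceptual illustration of what a stretched L2 segment looks like, the following sketch uses pyroute2 to create a host-level VXLAN interface; the interface names, VNI, and multicast group are assumptions, and the snippet needs root privileges.

from pyroute2 import IPRoute

ipr = IPRoute()

# Encapsulate L2 traffic for VNI 100 over the routed underlay reachable
# via eth0, flooding unknown traffic to a multicast group.
underlay = ipr.link_lookup(ifname="eth0")[0]
ipr.link(
    "add",
    ifname="vxlan100",
    kind="vxlan",
    vxlan_id=100,
    vxlan_group="239.1.1.1",
    vxlan_link=underlay,
)

# Bring the new interface up.
vxlan_idx = ipr.link_lookup(ifname="vxlan100")[0]
ipr.link("set", index=vxlan_idx, state="up")
ipr.close()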

The concept of availability zones was introduced to Neutron in the Mitaka release of OpenStack (https://blueprints.launchpad.net/neutron/+spec/add-availability-zone). Availability zones allow the OpenStack administrator to expose leaf boundaries to the Neutron schedulers, which can then place network resources such as routers and DHCP services based on the network topology. Although not yet in wide use, this feature will provide much more flexibility for OpenStack architects when deploying with a spine-leaf network architecture.
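A minimal sketch of how an administrator might use this feature, again with openstacksdk: the zone name (leaf-az-1) is an assumption, and it presupposes that the DHCP and L3 agents on that leaf have been configured with a matching availability_zone setting.

import openstack

conn = openstack.connect(cloud="mycloud")

# The hint asks Neutron to schedule this network's DHCP service onto
# agents in the named zone; routers take their own availability_zone_hints.
leaf_net = conn.network.create_network(
    name="leaf1-tenant-net",
    availability_zone_hints=["leaf-az-1"],
)

# Populated once Neutron has actually scheduled the resources.
print(leaf_net.availability_zones)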
