Providing network segmentation
OpenStack's roots in the public cloud provider space have had a significant impact on network design at both the physical and virtual layers. In a public cloud deployment, the relationship between the tenant workload and the provider workload is based on a total absence of trust. In these deployments, the users and applications in the tenant space have no network access to any of the systems providing the underlying compute, network, and storage. End users still need some way to reach the API endpoints of the OpenStack cloud, however, so the control plane is typically multihomed on a set of physically segmented networks. This adds a layer of complexity to the deployment, but it has proven to be a security best practice in private cloud deployments as well.
There are typically four types of networks in an OpenStack deployment. The first is a network that is used to access the OpenStack APIs from the internet in the case of a public cloud or the intranet in the case of a private cloud. In most deployments, only the load balancers have an address on this network. This allows tight control and auditing of traffic coming in from the internet. This network is often referred to as the external network. The second type of network is a private network that is used for communication between the OpenStack control plane and the compute infrastructure. The message bus and database are exposed on this network, and traffic is typically not routed in or out of this network. This network is referred to as the management network.
Two additional private networks are typically created to carry network and storage traffic between the compute, network, and storage nodes. These networks are broken out for quality of service as much as for security, and they are optional, depending on whether the deployment uses SDN or network-attached storage. The segment dedicated to tenant networking is frequently referred to as the tenant network or the underlay network. Depending on the size of the deployment, there may be one or more of these underlay networks. The storage network is, not surprisingly, referred to as the storage network.
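To make the segmentation concrete, the following sketch lays out a hypothetical addressing plan for the four networks described above and checks that the ranges do not overlap. The network names and CIDRs are illustrative assumptions, not values from any particular deployment.

    import ipaddress

    # Hypothetical addressing plan for the four networks described above.
    # The names and CIDRs are illustrative assumptions only.
    networks = {
        "external": "192.168.100.0/24",  # API endpoints behind the load balancers
        "management": "172.16.0.0/24",   # message bus, database, control plane traffic
        "tenant": "172.17.0.0/24",       # tenant (underlay) traffic between nodes
        "storage": "172.18.0.0/24",      # traffic to network-attached storage
    }

    # Segmentation sanity check: none of the ranges may overlap.
    subnets = {name: ipaddress.ip_network(cidr) for name, cidr in networks.items()}
    for first in subnets:
        for second in subnets:
            if first < second and subnets[first].overlaps(subnets[second]):
                raise ValueError(f"{first} and {second} overlap")
    print("Segmentation plan has no overlapping ranges")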
One last class of network is required in most deployments for tenant workloads to access the internet or intranet. Commonly referred to as the provider network, this is a physical network that is modeled in Neutron so that network ports can be provided on it. Floating IPs, which can be dynamically assigned to instances, are drawn from the provider networks. In some deployments, instances are allowed to take ports on provider networks directly, without passing through a tenant network and router. Organizations that would like to have the Neutron API available, but don't want to implement SDN, frequently use this pattern of modeling the physical infrastructure in Neutron with provider networks. This allows them to keep their traditional physical switches and routers while still providing dynamic virtual interfaces.
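As a rough illustration of this pattern, the sketch below uses the openstacksdk client library to model an existing physical segment as a flat provider network in Neutron and then draws a floating IP from it. The cloud name, the physical network label (physnet1), and the addressing are placeholder assumptions; a real deployment would substitute the values from its own Neutron plugin configuration.

    import openstack

    # Credentials are read from a clouds.yaml entry; "mycloud" is a placeholder.
    conn = openstack.connect(cloud="mycloud")

    # Model an existing physical segment ("physnet1" is an assumed label from the
    # Neutron plugin configuration) as a flat, external provider network.
    network = conn.network.create_network(
        name="provider-external",
        provider_network_type="flat",
        provider_physical_network="physnet1",
        is_router_external=True,
        is_shared=True,
    )

    # Attach a subnet whose allocation pool supplies floating IP addresses.
    # The CIDR, gateway, and pool boundaries are illustrative only.
    conn.network.create_subnet(
        name="provider-external-subnet",
        network_id=network.id,
        ip_version=4,
        cidr="203.0.113.0/24",
        gateway_ip="203.0.113.1",
        allocation_pools=[{"start": "203.0.113.100", "end": "203.0.113.200"}],
        is_dhcp_enabled=False,
    )

    # Draw a floating IP from the provider network, ready to be associated
    # with an instance's port.
    fip = conn.network.create_ip(floating_network_id=network.id)
    print(fip.floating_ip_address)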