- Kubernetes on AWS
- Ed Robinson
- 1011 characters
- 2021-06-10 18:41:30
Preparing the network
We will set up a new VPC in your AWS account. A VPC, or Virtual Private Cloud, gives us a private network, isolated from other EC2 users and from the internet, onto which we can launch instances.
It provides a foundation on which we can build a secure network for our cluster. Create it as shown in the following command:
$ VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query "Vpc.VpcId" --output text)
The VpcId will be unique to your account, so I am going to set a shell variable that I can use to refer to it whenever we need it. You can do the same with the VpcId from your account, or you might prefer to type it out each time you need it.
The rest of the steps in this chapter follow this pattern, but if you don't understand what is happening, don't be afraid to look at the shell variables and correlate the IDs with the resources in the AWS console, as follows:
$ echo $VPC_ID
Kubernetes names your instances based on the internal DNS hostnames that AWS assigns to them. If we enable DNS support in the VPC, then we will be able to resolve these hostnames when using the DNS server provided inside the VPC, as follows:
$ aws ec2 modify-vpc-attribute \
    --enable-dns-support \
    --vpc-id $VPC_ID
$ aws ec2 modify-vpc-attribute \
    --enable-dns-hostnames \
    --vpc-id $VPC_ID
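You can confirm that both attributes took effect with describe-vpc-attribute (this requires live AWS credentials, so it is shown only as a sketch; each call should report Value: true):

```
aws ec2 describe-vpc-attribute \
    --attribute enableDnsSupport \
    --vpc-id $VPC_ID
aws ec2 describe-vpc-attribute \
    --attribute enableDnsHostnames \
    --vpc-id $VPC_ID
```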
Kubernetes makes extensive use of AWS resource tags, so it knows which resources it can use and which resources are managed by Kubernetes. The key for these tags is kubernetes.io/cluster/<cluster_name>. For resources that might be shared between several distinct clusters, we use the shared value. This means that Kubernetes can make use of them, but won't ever remove them from your account.
We would use this for resources such as VPCs. Resources where the life cycle is fully managed by Kubernetes have a tag value of owned and may be deleted by Kubernetes if they are no longer required. Kubernetes typically creates these tags automatically when it creates resources such as instances in an autoscaling group, EBS volumes, or load balancers.
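To make the convention concrete, here is a sketch of how the tag key is built from the cluster name (the name hopper is the one used throughout this chapter):

```shell
# Assumption: the cluster in this chapter is named "hopper".
CLUSTER_NAME=hopper

# The tag key Kubernetes looks for on resources it is allowed to use:
TAG_KEY="kubernetes.io/cluster/${CLUSTER_NAME}"

# Value "shared": Kubernetes may use the resource but will never delete it
# (appropriate for the VPC). Value "owned": the resource's life cycle is
# fully managed by Kubernetes, and it may be deleted when no longer needed.
echo "${TAG_KEY}=shared"
```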
Let's add a tag to our new VPC so that Kubernetes will be able to use it, as shown in the following command:
$ aws ec2 create-tags \
    --resources $VPC_ID \
    --tags Key=Name,Value=hopper \
    Key=kubernetes.io/cluster/hopper,Value=shared
When we created our VPC, a main route table was automatically created. We will use this for routing in our private subnet. Let's grab the ID to use later, as shown in the following command:
$ PRIVATE_ROUTE_TABLE_ID=$(aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=$VPC_ID \
    --query "RouteTables[0].RouteTableId" \
    --output text)
Now we will add a second route table to manage routing for the public subnets in our VPC, as follows:
$ PUBLIC_ROUTE_TABLE_ID=$(aws ec2 create-route-table \
    --vpc-id $VPC_ID \
    --query "RouteTable.RouteTableId" \
    --output text)
Now we will give the route tables names so we can keep track of them later:
$ aws ec2 create-tags \
    --resources $PUBLIC_ROUTE_TABLE_ID \
    --tags Key=Name,Value=hopper-public
$ aws ec2 create-tags \
    --resources $PRIVATE_ROUTE_TABLE_ID \
    --tags Key=Name,Value=hopper-private
Next, we are going to create two subnets for our cluster to use. Because I am creating my cluster in the eu-west-1 region (Ireland), I am going to create these subnets in the eu-west-1a availability zone. You should choose an availability zone for your cluster from the region you are using by running aws ec2 describe-availability-zones. In Part 3, we will learn how to create high-availability clusters that span multiple availability zones.
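For example, the following query lists just the zone names for the region your CLI is configured for (it requires live AWS credentials, and the output will vary by account and region):

```
aws ec2 describe-availability-zones \
    --query "AvailabilityZones[].ZoneName" \
    --output text
```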
Let's start by creating a subnet for instances that will only be accessible from within our private network. We are going to use a /20 netmask on the CIDR block, as shown in the following command; with this, AWS will give us 4,091 usable IP addresses (a /20 block contains 4,096 addresses, of which AWS reserves 5 in every subnet) that can be assigned to our EC2 instances and to pods launched by Kubernetes:
$ PRIVATE_SUBNET_ID=$(aws ec2 create-subnet \
    --vpc-id $VPC_ID \
    --availability-zone eu-west-1a \
    --cidr-block 10.0.0.0/20 \
    --query "Subnet.SubnetId" \
    --output text)
$ aws ec2 create-tags \
    --resources $PRIVATE_SUBNET_ID \
    --tags Key=Name,Value=hopper-private-1a \
    Key=kubernetes.io/cluster/hopper,Value=owned \
    Key=kubernetes.io/role/internal-elb,Value=1
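The address count can be checked with shell arithmetic: a /20 netmask leaves 12 bits for host addresses, and AWS reserves 5 addresses in every subnet (the network address, the VPC router, the DNS server, one reserved for future use, and the broadcast address):

```shell
# A /20 netmask leaves 32 - 20 = 12 host bits.
TOTAL=$(( 1 << (32 - 20) ))   # 4096 addresses in the block
RESERVED=5                    # addresses AWS reserves per subnet
USABLE=$(( TOTAL - RESERVED ))
echo "$USABLE"                # 4091
```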
Next, let's add another subnet to the same availability zone, as shown in the following command. We will use this subnet for instances that need to be accessible from the internet, such as public load balancers and bastion hosts:
$ PUBLIC_SUBNET_ID=$(aws ec2 create-subnet \
    --vpc-id $VPC_ID \
    --availability-zone eu-west-1a \
    --cidr-block 10.0.16.0/20 \
    --query "Subnet.SubnetId" \
    --output text)
$ aws ec2 create-tags \
    --resources $PUBLIC_SUBNET_ID \
    --tags Key=Name,Value=hopper-public-1a \
    Key=kubernetes.io/cluster/hopper,Value=owned \
    Key=kubernetes.io/role/elb,Value=1
Next, we should associate this subnet with the public route table, as follows:
$ aws ec2 associate-route-table \
    --subnet-id $PUBLIC_SUBNET_ID \
    --route-table-id $PUBLIC_ROUTE_TABLE_ID
In order for the instances in our public subnet to communicate with the internet, we will create an internet gateway, attach it to our VPC, and then add a route to the route table, routing traffic bound for the internet to the gateway, as shown in the following command:
$ INTERNET_GATEWAY_ID=$(aws ec2 create-internet-gateway \
    --query "InternetGateway.InternetGatewayId" \
    --output text)
$ aws ec2 attach-internet-gateway \
    --internet-gateway-id $INTERNET_GATEWAY_ID \
    --vpc-id $VPC_ID
$ aws ec2 create-route \
    --route-table-id $PUBLIC_ROUTE_TABLE_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id $INTERNET_GATEWAY_ID
The instances in the private subnet will need to make outbound connections to the internet in order to install software packages and so on. To make this possible, we will add a NAT gateway to the public subnet and then add a route to the private route table for internet-bound traffic, as follows:
$ NAT_GATEWAY_ALLOCATION_ID=$(aws ec2 allocate-address \
    --domain vpc \
    --query AllocationId \
    --output text)
$ NAT_GATEWAY_ID=$(aws ec2 create-nat-gateway \
    --subnet-id $PUBLIC_SUBNET_ID \
    --allocation-id $NAT_GATEWAY_ALLOCATION_ID \
    --query NatGateway.NatGatewayId \
    --output text)
At this stage, you may have to wait a few moments for the NAT gateway to be created before creating the route, as shown in the following command:
$ aws ec2 create-route \
    --route-table-id $PRIVATE_ROUTE_TABLE_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id $NAT_GATEWAY_ID
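Rather than polling by hand, the AWS CLI ships a built-in waiter that blocks until the NAT gateway reaches the available state (this requires live AWS credentials, so it is shown only as a sketch; it can be run before the create-route command above):

```
aws ec2 wait nat-gateway-available \
    --nat-gateway-ids $NAT_GATEWAY_ID
```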