- Getting Started with Kubernetes (Second Edition)
- Jonathan Baier
Scheduling example
Let's take a look at a quick example of setting some resource limits. In the K8s dashboard, we can get a quick snapshot of the current resource usage on our cluster by browsing to https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard and clicking on Nodes in the left-hand side menu.
We will see a dashboard as shown in the following screenshot:

This view shows the aggregate CPU and memory across the whole cluster, nodes, and master. In this case, we have fairly low CPU utilization, but a decent chunk of memory in use.
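If you prefer the command line, a similar snapshot of per-node capacity and current allocations is available with kubectl. A minimal sketch (node names and output will differ in your cluster):

```shell
# List the nodes in the cluster with their status
kubectl get nodes

# Show each node's capacity, allocatable resources, and the
# CPU/memory currently requested by scheduled pods; the
# "Allocated resources" section is what the scheduler checks
# when placing new pods
kubectl describe nodes | grep -A 5 "Allocated resources"
```

This is the same data the dashboard aggregates, and it is handy when diagnosing the scheduling failure we are about to provoke.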
Let's see what happens when I try to spin up a few more pods, but this time requesting 512Mi of memory and 1500m of CPU. We use 1500m to specify 1.5 CPUs; since each node only has 1 CPU, this should result in failure. Here's an example RC definition:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-constraints
  labels:
    name: node-js-constraints
spec:
  replicas: 3
  selector:
    name: node-js-constraints
  template:
    metadata:
      labels:
        name: node-js-constraints
    spec:
      containers:
      - name: node-js-constraints
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "512Mi"
            cpu: "1500m"
Listing 2-12: nodejs-constraints-controller.yaml
To create the replication controller from the preceding file, use the following command:
$ kubectl create -f nodejs-constraints-controller.yaml
The replication controller is created successfully, but if we run kubectl get pods, we'll note that the node-js-constraints pods are stuck in a pending state. If we look a little closer with the kubectl describe pods/<pod-id> command, we'll note a scheduling error (for pod-id, use one of the pod names from the first command):
$ kubectl get pods
$ kubectl describe pods/<pod-id>
The following screenshot is the result of the preceding command:

Note, in the Events section at the bottom, the Warning FailedScheduling event, which is accompanied by a fit failure on node....Insufficient cpu message. As you can see, Kubernetes could not find a node in the cluster that satisfied all the constraints we defined.
If we now lower our CPU constraint to 500m and then recreate the replication controller, we should have all three pods running within a few moments.
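For reference, the only change needed is in the resources block of Listing 2-12 (a sketch of just the updated section):

```yaml
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"    # reduced from 1500m so each pod fits on a 1-CPU node
```

With three replicas at 500m each, the total request no longer exceeds what any single pod needs from a 1-CPU node, so the scheduler can place all three. To recreate the controller, delete it with kubectl delete rc node-js-constraints and then rerun the kubectl create -f command from earlier.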