
Ingress

We previously discussed how Kubernetes uses the service abstraction as a means to proxy traffic to backing pods distributed throughout our cluster. While this is helpful in both scaling and pod recovery, there are more advanced routing scenarios that are not addressed by this design.

To that end, Kubernetes has added an ingress resource, which allows for custom proxying and load balancing to a backing service. Think of it as an extra layer or hop in the routing path before traffic hits our service. Just as an application has a service and backing pods, the ingress resource needs both an Ingress entry point and an ingress controller that performs the custom logic. The entry point defines the routes and the controller actually handles the routing. This is helpful for picking up traffic that would normally be dropped by an edge router or forwarded elsewhere outside of the cluster.
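As a rough sketch of what the entry point looks like, the following manifest defines a single rule that routes requests for a given host and path to a backing service. The name, host, service, and API version here are illustrative placeholders; the exact schema depends on your cluster version:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: app-service
          servicePort: 80

The controller watching for this resource would then configure its proxy or cloud load balancer so that traffic for app.example.com/app reaches the app-service endpoints.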

Ingress itself can be configured to offer externally addressable URLs for internal services, to terminate SSL, to offer name-based virtual hosting as you'd see in a traditional web server, or to load balance traffic. Ingress on its own cannot serve requests; it requires an additional ingress controller to fulfill the capabilities outlined in the object. You'll see nginx and other load balancing or proxying technologies involved as part of the controller framework. In the following examples, we'll be using GCE, which provides a controller for us; on other platforms, you'll need to deploy a controller yourself in order to take advantage of this feature. A popular option at the moment is the nginx-based ingress-nginx controller.
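For instance, a single ingress could hypothetically terminate SSL for one site and fan traffic out by hostname, along these lines (the hosts, TLS secret, and service names below are placeholders, not part of any real deployment):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: virtual-host-ingress
spec:
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-tls
  rules:
  - host: shop.example.com
    http:
      paths:
      - backend:
          serviceName: shop-service
          servicePort: 80
  - host: blog.example.com
    http:
      paths:
      - backend:
          serviceName: blog-service
          servicePort: 80

Here the controller would present the certificate stored in the shop-tls secret for shop.example.com and route each hostname to its own backing service, much like virtual hosts in a traditional web server.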

An ingress controller is deployed as a pod that runs a daemon. This pod watches the Kubernetes apiserver's ingresses endpoint for changes to ingress resources. For our examples, we will use the default GCE backend.
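When more than one controller is running in a cluster, the kubernetes.io/ingress.class annotation is commonly used to tell a particular controller to claim an ingress; a minimal sketch for the GCE controller might look like the following (the resource name is again a placeholder):

metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"

Omitting the annotation on GCE generally leaves the resource to the default controller, which is the behavior we rely on in these examples.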
