
Introduction to Kubernetes

In the previous chapter, we studied serverless frameworks, created serverless applications using these frameworks, and deployed these applications to the major cloud providers.

As we have seen in the previous chapters, Kubernetes and serverless architectures started to gain traction in the industry at around the same time. Kubernetes achieved widespread adoption and became the de facto container management system thanks to its design principles of scalability, high availability, and portability. For serverless applications, Kubernetes provides two essential benefits: removal of vendor lock-in and reuse of services.

Kubernetes removes vendor lock-in by creating a layer of abstraction over the infrastructure. Vendor lock-in is a situation where the transition from one service provider to another is very difficult or even infeasible. In the previous chapter, we studied how serverless frameworks make it easy to develop cloud-agnostic serverless applications. Let's assume you are running your serverless framework on an AWS EC2 instance and want to move to Google Cloud. Although your serverless framework creates a layer between the cloud provider and your serverless applications, you are still deeply tied to the cloud provider for the infrastructure. Kubernetes breaks this connection by abstracting the infrastructure away from the cloud provider. In other words, serverless frameworks running on Kubernetes are unaware of the underlying infrastructure. If your serverless framework runs on Kubernetes in AWS, it is expected to run just as well on Google Cloud Platform (GCP) or Azure.

As the de facto container management system, Kubernetes manages most microservices applications in the cloud and in on-premises systems. Let's assume you have already converted your big monolithic application into cloud-native microservices and are running them on Kubernetes. Now you've started developing serverless applications or converting some of your microservices into serverless nanoservices. At this stage, your serverless applications will need to access your data and other services. If you can run your serverless applications in your Kubernetes clusters, you will have the chance to reuse those services and stay close to your data. Besides, it will be easier to manage and operate both microservices and serverless applications together.

As a solution to vendor lock-in, and for potential reuse of data and services, it is crucial to learn how to run serverless architectures on Kubernetes. In this chapter, a Kubernetes recap is presented to introduce the origin and design of Kubernetes. Following that, we will install a local Kubernetes cluster, and you will be able to access the cluster by using a dashboard or a client tool such as kubectl. In addition to that, we will discuss the building blocks of Kubernetes applications, and finally, we'll deploy a real-life application to the cluster.
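To give a first taste of what cluster access looks like beyond the dashboard and kubectl covered later in this chapter, here is a minimal sketch using the official Kubernetes Python client. The kubernetes package and a working kubeconfig for a cluster are assumptions at this point; nothing has been installed yet in the chapter's flow.

```python
# A minimal sketch: list nodes and pods with the official Kubernetes Python client.
# Assumes `pip install kubernetes` and a valid ~/.kube/config pointing at a running cluster.
from kubernetes import client, config

config.load_kube_config()  # load the same credentials and context that kubectl uses
v1 = client.CoreV1Api()

print("Nodes in the cluster:")
for node in v1.list_node().items:
    print(f"  {node.metadata.name}")

print("Pods across all namespaces:")
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"  {pod.metadata.namespace}/{pod.metadata.name}")
```

This is roughly equivalent to running kubectl get nodes and kubectl get pods --all-namespaces, and it illustrates the kind of client access we will set up against the local cluster later in the chapter.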
