- Kubernetes on AWS
- Ed Robinson
Google's Infrastructure for the Rest of Us
Kubernetes was originally built by some of the engineers at Google who were responsible for their internal container scheduler, Borg.
Learning how to run your own infrastructure with Kubernetes can give you some of the same superpowers that the site reliability engineers at Google utilize to ensure that Google's services are resilient, reliable, and efficient. Using Kubernetes allows you to make use of the knowledge and expertise that engineers at Google and other companies have built up by virtue of their massive scale.
Your organization may never need to operate at the scale of a company such as Google. You will, however, discover that many of the tools and techniques developed in companies that operate on clusters of tens of thousands of machines are applicable to organizations running much smaller deployments.
While it is clearly possible for a small team to manually configure and operate tens of machines, the automation needed at larger scales can make your life simpler and your software more reliable. And if you later need to scale up from tens of machines to hundreds or even thousands, you'll know that the tools you are using have already been battle tested in the harshest of environments.
The fact that Kubernetes exists at all is both a measure of the success of, and a vindication of, the open source/free software movement. Kubernetes began as a project to open source an implementation of the ideas and research behind Google's internal container orchestration system, Borg. It has since taken on a life of its own, with the majority of its code now contributed by engineers outside Google.
The story of Kubernetes is not only one of Google recognizing the indirect benefits that open sourcing its knowledge would bring to its own cloud business; it is also a story of the open source implementations of the underlying tools it needed coming of age.
Linux containers had existed in some form or another for almost a decade, but it took the Docker project (first open sourced in 2013) for them to become widely used and understood by a large enough number of users. While Docker did not itself bring any single new underlying technology to the table, its innovation was in packaging the tools that already existed in a simple and easy-to-use interface.
Kubernetes was also made possible by the existence of etcd, a distributed key-value store based on the Raft consensus algorithm, first released in 2013 to underpin another cluster scheduling tool being built by CoreOS. For Borg, Google had used an underlying state store based on the very similar Paxos algorithm, making etcd the perfect fit for Kubernetes.
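etcd itself is simple to work with: it exposes a key-value API, and Kubernetes stores all of its cluster state in it. The following is a minimal sketch, not an excerpt from Kubernetes itself, assuming an etcd instance reachable at localhost:2379 and the go.etcd.io/etcd/client/v3 Go package; the key name is purely illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to a local etcd instance (assumes etcd is listening on localhost:2379).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Store a piece of state under an illustrative key...
	if _, err := cli.Put(ctx, "/registry/example", "hello"); err != nil {
		panic(err)
	}

	// ...and read it back.
	resp, err := cli.Get(ctx, "/registry/example")
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```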
Google was prepared to take the initiative to create an open source implementation of knowledge that, up until that point, had been a significant competitive advantage for its engineering organization, at a time when Linux containers were beginning to become more popular thanks to the influence of Docker.
In my view, it is the simplicity of the Go programming language, in which Kubernetes is written, that makes it such a good choice for open source infrastructure tools: a wide variety of developers can pick up the basics of the language in a few hours and start making productive contributions to a project.
If you are interested in finding out more about the Go programming language, take a look at https://tour.golang.org/welcome/1 and then spend an hour exploring https://gobyexample.com.
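As a small illustration of that simplicity (the node names and the work being done here are made up for the example), the short program below fans a few tasks out to goroutines and collects the results over a channel, using nothing beyond the standard library:

```go
package main

import (
	"fmt"
	"sync"
)

// checkNode stands in for some per-node work, such as a health check;
// it simply reports a result on the shared channel.
func checkNode(name string, results chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	results <- fmt.Sprintf("node %s: ok", name)
}

func main() {
	nodes := []string{"node-1", "node-2", "node-3"} // hypothetical node names
	results := make(chan string, len(nodes))

	var wg sync.WaitGroup
	for _, n := range nodes {
		wg.Add(1)
		go checkNode(n, results, &wg)
	}
	wg.Wait()
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```

Concurrency primitives like these, built into the language, are part of why Go is so common in infrastructure tooling.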