
The developer workflow

Even if you're not working on a PaaS system, it is good to treat each piece of a service as something that should have a consistent environment between the developer and final deployment hosts, be able to run anywhere with minimal changes, and be modular enough to be swapped out for an API-compatible analogue if needed. In many of these cases, even local Docker usage can go a long way toward making deployments easier, as you can isolate each component into small pieces that don't change as your development environment changes.

To illustrate this, imagine a practical case where we are writing a simple web service that talks to a database, our development system is based on the latest Ubuntu, but our deployment environment is some iteration of CentOS. In this case, because their support cycles differ so drastically in length, coordinating versions and libraries between the two will be extremely difficult, so as a developer, you can use Docker to provide the same version of the database that CentOS would have, and you can test your service in a CentOS-based container to ensure that all the libraries and dependencies will work when it gets deployed. This process improves the development workflow even if the real deployment hosts have no containerization.
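As a rough sketch of this setup, the commands below run the database at the version the target OS would provide and then run the tests inside a CentOS 7 container; the PostgreSQL version, the centos:7 tag, and the run_tests.sh entry point are assumptions that you would replace with your project's real values.

    # Create a network so the test container can reach the database by name.
    docker network create dev-net

    # Run the same database version that the target CentOS release ships with
    # (postgres:9.2 is a placeholder; match it to your target's repositories).
    docker run -d --name dev-db --network dev-net -e POSTGRES_PASSWORD=devpass postgres:9.2

    # Run the test suite inside a CentOS-based container so that libraries and
    # dependencies match the deployment hosts (run_tests.sh is hypothetical).
    docker run --rm --network dev-net -v "$PWD":/src -w /src centos:7 ./run_tests.sh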

Now we will take this example in a slightly more realistic direction: what if you need to run your service, without code modifications, on all currently supported versions of CentOS?

With Docker, you can have a container for each version of the OS against which you can test the service, ensuring that you are not going to get any surprises. For bonus points, you can automate a test suite runner that launches each of the OS version containers one by one (or, even better, in parallel) to run your whole test suite against them automatically on any code change, as sketched below. With just these few small tweaks, we have turned an ad hoc service that would constantly break in production into something you almost never have to worry about, since you can be confident that it will work when deployed, and that is really powerful tooling to have.
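A minimal sketch of such a runner is shown below; the list of CentOS versions and the run_tests.sh entry point are assumptions, and you would adjust both to whatever your project actually supports.

    #!/bin/bash
    # Run the test suite once for each supported CentOS version.
    set -e

    for version in 6 7; do
        echo "=== Testing against centos:${version} ==="
        # Mount the source tree and run the (hypothetical) test entry point.
        docker run --rm -v "$PWD":/src -w /src "centos:${version}" ./run_tests.sh
    done

To run the versions in parallel instead of one by one, you could background each docker run with & and wait for all of them to finish afterwards.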

If you extend this process, you can locally create Docker recipes (Dockerfiles), which we will cover in detail in the next chapter, with the exact set of steps needed to take a vanilla CentOS installation to one fully capable of running the service. These steps can be taken, with minimal changes, by the ops teams as input to their automated configuration management (CM) system, such as Ansible, Salt, Puppet, or Chef, to ensure that the hosts have the exact baseline required for things to run properly. This codified transfer of the exact steps needed on the target host, written by the service developer, is exactly why Docker is such a powerful tool.
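To give a flavor of what such a recipe looks like before we cover Dockerfiles properly in the next chapter, here is a minimal, hypothetical sketch; the base image tag, the package names, and the entry point are all placeholders for your real service.

    # Start from a vanilla CentOS base image.
    FROM centos:7

    # Install the runtime dependencies the service needs
    # (python3 is a placeholder for your real dependencies).
    RUN yum install -y python3 && yum clean all

    # Copy the service code into the image and set the working directory.
    COPY . /opt/service
    WORKDIR /opt/service

    # The command that starts the service (hypothetical entry point).
    CMD ["python3", "app.py"]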

As is hopefully becoming apparent, Docker as a tool is not only useful on the deployment machines themselves; it can also be used throughout the process to standardize your environments and thus increase the efficiency of almost every part of the deployment pipeline. With Docker, you will most likely forget the infamous phrase that instills dread in every Ops person: "it works fine on my machine!". This, by itself, should be enough to make you consider splicing container-based workflows into your process even if your deployment infrastructure doesn't support containers.

The bottom line here, which we've been dancing around and which you should always keep in mind, is that with the current tooling available, turning your whole deployment infrastructure into a container-based one is somewhat difficult, but adding containers to any other part of your development process is generally not too difficult and can provide exponential workflow improvements to your team.
