So far, we have managed to build, run, and expose a single container on our Minikube instance. If you are used to using Docker to perform similar tasks, you might notice that, although the steps we took were quite simple, getting even a hello world application like this up and running involved a little more complexity.
A lot of this has to do with the scope of the tool. Docker provides a simple, easy-to-use workflow for building and running single containers on a single machine, whereas Kubernetes is, of course, first and foremost a tool designed to manage many containers running across multiple nodes.
In order to understand some of the complexity that Kubernetes introduces, even in this simple example, we are going to explore how Kubernetes works behind the scenes to keep our application running reliably.
When we executed kubectl run, Kubernetes created a new kind of resource: a deployment. A deployment is a higher-level abstraction that manages the underlying ReplicaSet on our behalf. The advantage of this is that, if we want to make changes to our application, Kubernetes can manage rolling the new configuration out to our running application:
Figure: The architecture of our simple Hello application
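You can see each layer of this hierarchy for yourself with kubectl. The following commands are a rough sketch, assuming the deployment was named hello and that a hello:v2 image tag exists; substitute the names from your own cluster:

    # The deployment created by kubectl run
    kubectl get deployments

    # The ReplicaSet that the deployment manages on our behalf
    kubectl get replicasets

    # The pods created by that ReplicaSet
    kubectl get pods

    # Changing the deployment's configuration (here, its image) triggers
    # a rolling update of the underlying pods
    kubectl set image deployment/hello hello=hello:v2
    kubectl rollout status deployment/hello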
When we executed kubectl expose, Kubernetes created a service with a label selector matching the pods managed by the deployment that we referenced.
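Again, this is visible from the command line. As a sketch (assuming the service and deployment are both named hello, and that kubectl run applied its default run=hello label to the pods), you can inspect the selector and the pod endpoints it currently matches:

    # Show the service, including its label selector
    kubectl describe service hello

    # The endpoints object lists the pod IPs that the selector matches
    kubectl get endpoints hello

    # The same pods, selected directly by the label
    kubectl get pods -l run=hello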