In simple terms, Docker containers provide an isolated and secure environment for application components to run in. This isolation allows one or many containers to run simultaneously on a given host. For simplicity's sake, Docker containers are often loosely described as lightweight VMs (virtual machines). However, they are very different from traditional VMs: Docker containers do not need a hypervisor to run, so far more containers can run on a given hardware configuration.
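As a quick illustration of that isolation, here is a minimal shell sketch that starts two containers side by side on the same host (the names web1 and web2 are placeholders, and the alpine image is assumed to be available):

    # Start two isolated containers from the same image on a single host
    $ docker run -d --name web1 alpine sleep 300
    $ docker run -d --name web2 alpine sleep 300
    # List both containers running simultaneously, no hypervisor involved
    $ docker ps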
Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system, all of which can amount to tens of GBs. Docker containers, on the other hand, include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. It is this aspect that makes them analogous to real-world shipping containers. The following diagram sums it all up:
Figure 1.18: Difference between traditional VMs and Docker containers
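One way to observe the kernel sharing described above is to compare the kernel reported inside a container with the host's own; the following is a minimal sketch, assuming a Linux host and the alpine image:

    # The host's kernel version
    $ uname -r
    # A container reports the very same kernel: it is an isolated
    # process, not a guest OS with a kernel of its own
    $ docker run --rm alpine uname -r
    # From the host, a running container (web1 from the earlier
    # sketch) is visible as an ordinary set of processes
    $ docker top web1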
Listed next are some of the key building blocks of the Docker technology:
Docker container: An isolated and secure environment for applications to run in.
Docker engine: A client-server application with the following components:
A daemon process used to create and manage Docker objects, such as images, containers, networks, and data volumes.
A REST API that clients use to talk to the daemon
A command-line interface (CLI) client
Docker client: A client program that invokes the Docker engine through its REST API; the first sketch after this list shows a direct API call and its CLI equivalent.
Docker host: The underlying operating system that shares its kernel with the Docker containers. Until recently, Windows needed Linux virtualization to host Docker containers.
Docker Hub: The public registry used to host and manage Docker images posted by various users. Images made public are available for anyone to download in order to create containers from them; a brief pull example also follows this list.
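To make the engine's client-server split concrete, the following sketch queries the daemon's REST API directly over its default Unix socket and then issues the equivalent CLI command, which wraps the same API call (v1.41 is an assumed API version; check yours with docker version):

    # Talk to the daemon's REST API directly over the Unix socket
    # (v1.41 is an assumed API version)
    $ curl --unix-socket /var/run/docker.sock \
        http://localhost/v1.41/containers/json
    # The docker CLI client performs the same API call under the hood
    $ docker ps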
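And as a brief sketch of the Docker Hub workflow, pulling a public image and creating a container from it (the container name webserver and the port mapping are placeholders):

    # Download the public nginx image from Docker Hub
    $ docker pull nginx
    # Create and start a container from the downloaded image
    $ docker run -d --name webserver -p 8080:80 nginx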