How Do Docker Containers Work?

Containers are high on the agenda of digitization strategies that focus on IT architecture. Containerization is considered the most significant change in the IT world since the introduction of hardware virtualization with virtual machines (VMs). This new variant of virtualization gained momentum with the trend away from monolithic applications and towards so-called microservices.

Similar to VMs, containers provide an environment in which applications can run. However, while a VM maps an entire computing environment, a container includes only the components needed to run the application, such as operating system libraries and binaries. This enables a much lighter form of virtualization. Probably the best-known container technology is Docker, which is why the term “Docker Container” is on everyone’s lips.

Docker Containers are encapsulated units that can be executed independently of each other, no matter where they are located. Think of them as freight containers in which one or more little figures sit and work. These figures are in fact applications, such as PHP, MySQL, and Apache, that sit together in one container (see the graphic for an example). For the figures, it makes no difference whether the freight container is located in Munich, New York, or Sydney, because from the inside it always looks the same and the same conditions prevail. The same applies to the applications in a software container.

Difference from Virtual Machines

Containers are described as a lighter form of virtualization because several of them, each with its own isolated application, can run within a single operating system installation. To achieve the same separation of applications with hardware virtualization, a complete VM, including its own operating system, would have to be started for each application. VMs therefore require significantly more resources.

Unlike VMs, containers do not virtualize the hardware but the operating system. VMs run directly on a physical server that is virtualized with the help of a so-called hypervisor such as VMware ESXi. Containers are virtualized at a higher level, without a hypervisor: the installed operating system, together with the container engine, takes care of the virtualization. This is far less complex than emulating complete hardware.
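You can see that a container shares the host’s kernel rather than booting its own operating system with a quick check (a minimal sketch; the alpine image is just an arbitrary small example):

    # Kernel version of the host
    uname -r

    # Kernel version seen inside a container: identical, because the
    # container shares the host kernel instead of emulating its own hardware
    docker run --rm alpine uname -r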

Advantages of containers

The technology is particularly popular with developers because Docker Containers are much more efficient and resource-saving than VMs: they require less CPU and RAM.
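The difference can be observed directly on a running system: Docker’s stats command shows live CPU and memory usage per container (shown here purely as an illustration; the numbers depend entirely on your workload):

    # Live CPU and memory consumption of all running containers
    docker stats

    # One-off snapshot instead of a live stream
    docker stats --no-stream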

A further advantage is their portability. As self-contained application packages, they can be executed on a wide variety of systems. They can be used not only for local development but also run smoothly on production servers, regardless of the chosen infrastructure or cloud platform. This results in greater speed and consistency in development, debugging, and testing. The classic argument between development and operations, “but it worked on my machine”, becomes a thing of the past.
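Portability in practice: the very same command starts the identical image on a developer laptop, a test server, or a cloud VM (nginx is used here only as an example image, and the port mapping is arbitrary):

    # Identical on every Docker host, regardless of the underlying infrastructure
    docker run -d --name web -p 8080:80 nginx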

Containers are also highly scalable. If additional instances of an application are needed, for example because traffic on a website spikes after a successful marketing campaign, new containers can easily be started, and stopped again once the load drops. Hundreds of containers can be started or stopped within seconds. Managing such large numbers is made easier by orchestration solutions.
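A rough sketch of this kind of scaling with plain Docker commands (the web1..web5 names, the nginx image, and the ports are made up for this example; in production an orchestrator would do this for you):

    # Start five instances of the same image within seconds
    for i in 1 2 3 4 5; do
      docker run -d --name web$i -p 808$i:80 nginx
    done

    # ...and stop them again just as quickly
    docker rm -f web1 web2 web3 web4 web5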

Container Management

To efficiently manage a large number of containers, an orchestration solution is required. The best known are Kubernetes, Docker Swarm, and Amazon’s Elastic Container Service. Among other things, they take care of starting and stopping containers, placing them optimally on the available compute nodes, and automatically adjusting the number of compute nodes when the load changes.
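With Docker Swarm, for example, scaling a service up or down is a one-liner (a sketch; the service name web and the nginx image are arbitrary examples):

    # Turn the Docker host into a (single-node) swarm
    docker swarm init

    # Run a service with three replicas, spread across the available nodes
    docker service create --name web --replicas 3 -p 80:80 nginx

    # Scale up when the load increases
    docker service scale web=10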

Container Images

Now that the advantages of the technology are obvious, the question is how containers are built and used. The basis for every container is a so-called image: a single file that contains all components needed to run an application platform-independently and thus eliminates the need to install and update software on the target system. An image can therefore be transferred to another system by a simple copy process, and the container is then started from the image.
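The “simple copy process” can be done with docker save and docker load, which export an image into a single tar file and import it again on another host (a sketch using ubuntu:22.04 as an example image):

    # Export the image to a single file...
    docker save -o ubuntu.tar ubuntu:22.04

    # ...copy it to another machine by any means (scp, USB stick, ...)
    # and import it there
    docker load -i ubuntu.tar

    # The container can then be started from the image as usual
    docker run -it ubuntu:22.04 bash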

Images are made available via a registry, which stores, manages, and distributes them. The best-known public registry is Docker Hub.
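Working with a registry essentially boils down to pulling and pushing images (myaccount is a placeholder for a hypothetical Docker Hub account):

    # Download an image from Docker Hub
    docker pull nginx

    # Tag a local image for your own repository and upload it
    docker tag nginx myaccount/nginx:1.0
    docker login
    docker push myaccount/nginx:1.0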

The Container Life Cycle

An image is of course not set in stone and can be adjusted as desired. This adaptation process is also called the container life cycle. We would like to illustrate it with an example (a command-line sketch follows the list):

  • Typically, the life of a Docker Container begins with the download of an image from a registry. As mentioned above, a registry is a kind of warehouse for container images.
  • From it, we download an example image. By starting the image on our Docker host, we create the actual container. In our example, it contains an Ubuntu operating system and an Apache web server.
  • Now we can adjust the container as needed, for example by adding another component. In our case, we add PHP.
  • To store the change permanently, a new image is created from the container. The new image now consists of Ubuntu, Apache, and PHP.
  • Finally, the image is stored in the registry again and can be used as a basis for further extensions.
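The life cycle described above roughly corresponds to the following commands (a sketch: myaccount/ubuntu-apache is a made-up image name for this example, and in day-to-day work a Dockerfile would usually replace the manual commit):

    # 1. Download the example image (Ubuntu with Apache) from the registry
    docker pull myaccount/ubuntu-apache

    # 2. Start the actual container from the image
    docker run -it --name webdev myaccount/ubuntu-apache bash

    # 3. Adjust it: add PHP inside the running container, then leave it
    apt-get update && apt-get install -y php libapache2-mod-php
    exit

    # 4. Create a new image from the modified container
    docker commit webdev myaccount/ubuntu-apache-php

    # 5. Store the new image in the registry as a basis for further extensions
    docker push myaccount/ubuntu-apache-php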

What should be considered?

Last but not least here are a few tips and tricks:

  • Ideally, only one service or process should run per container. Exceptions make sense if applications are closely interwoven or interdependent; with PHP, for example, it can make sense to run Nginx and PHP-FPM in the same container.
  • No user data, i.e. persistent data, should be stored inside the container. By default, containers are to be understood as “immutable infrastructure”: they only exist as long as they are running, and when a container is terminated or redeployed, all data created at runtime disappears with it. User data therefore belongs on an external, persistent volume (see the example after this list).
  • For higher quality and reusability, automation tools like Terraform, Ansible, and Jenkins should be used. With the tools mentioned above and a few do’s and don’ts in mind, you get a very modern, dynamic, and highly scalable environment.
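A minimal sketch of the volume tip from above (mydata is an arbitrary volume name, mysql is only an example image, and the password is a placeholder; adjust everything to your setup):

    # A named volume survives container restarts and redeployments
    docker volume create mydata

    # Mount it into the container; the database files now live outside the container
    docker run -d --name db \
      -e MYSQL_ROOT_PASSWORD=example \
      -v mydata:/var/lib/mysql \
      mysql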

Conclusion

Compared to virtual machines, Docker saves both time and resources. Many companies are therefore betting on containers to run their web and native applications faster on their servers. Interested in deploying Azure DevOps continuous delivery in your company for better project deployment? Heyooo will be happy to support you with the implementation!
