Hello guys, how are you? It is good to see you all here! As the title says, this article is all about containers! Now containers seem pretty straightforward, right? So why do we need to understand them in depth? Or rather, why are we discussing them at all? First of all, this field is always changing, so within a couple of months there will be something new to learn. Another reason is that container terminology is widely misused, and that keeps people from truly mastering containers. For example, the terms container and image are often used interchangeably, but conceptually the two are quite different. A repository has a different meaning in the world of containers, and the landscape of container technologies is much larger than Docker alone. So one needs a lot of conceptual clarity.
For those who already know a lot about containers, please go through the article and share your views in the comments. For everyone else, this article will walk you through the concepts. Anyone with a basic knowledge of computers can understand containers, so please don't hesitate. Are you ready? Let's rock!
You must have heard about containers, right? Containers are a form of operating-system-level virtualization. One can say that it is a lightweight approach to virtualization, providing only the bare minimum that an application requires to run. You can think of containers as super-minimalist virtual machines that don't run on a hypervisor. So what does a container include? Here is the list:
(1) The application code and binaries
(2) The libraries and dependencies the application needs
(3) Configuration files
But how do containers run in different environments? Containerization enables containers to run anywhere by abstracting away the operating system and the physical infrastructure. Containerized applications share the kernel of the host operating system with other containers, and the shared part of the OS is read-only. Often there is only a single executable service or microservice inside a container.
How big is a container? Its size is measured in tens of megabytes, and it takes only one to two seconds to provision. Containers can be created as load increases and destroyed as load decreases. At update time, the configuration file is updated, new containers are created from it, and the old ones are destroyed.
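This create-new-then-destroy-old flow is how declarative container tooling behaves in practice. As a minimal sketch, assuming Docker Compose is installed (the service name, image, and environment variable are all illustrative):

```yaml
# docker-compose.yml -- hypothetical single-service setup
services:
  web:
    image: nginx:alpine   # small base image; provisions in seconds
    environment:
      - APP_ENV=production
```

Running `docker compose up -d --scale web=3` starts three identical containers from this one definition; after editing the file, re-running `docker compose up -d` creates containers from the new configuration and removes the old ones. (Flag names are from recent Docker Compose releases.)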
Our laptops, computers, phones, etc. are built from different types of hardware, for example the CPU, network card, memory, and storage (SSD, disk). Everyone knows this, right? The hardware is already there, but how does one interact with it? The interaction happens through a piece of software known as the kernel! The kernel is the bridge between the hardware and the rest of the system. It schedules processes, manages devices, handles exceptions, and so on.
Apart from this, the rest of the operating system helps to boot and manage the userspace. User processes run in this space and constantly interact with the kernel.
The most common question everyone asks is, “What is the difference between a container and a virtual machine?” So let us understand the concept of a virtual machine. Basically, a virtual machine is a replica of a real computer, and it executes programs just like one. It runs on top of a physical machine using a hypervisor. In turn, the hypervisor runs on a host machine or on bare metal.
The hypervisor could be software, firmware, or hardware, and it runs on the physical computer known as the host machine. The host machine's responsibility is to provide virtual machines with resources like RAM and CPU, and those resources are shared among the virtual machines as required. So, for example, if one virtual machine is running a heavy application, it will need more resources, and those resources come at the expense of other virtual machines that aren't running heavy workloads.
The virtual machine is also known as the guest machine. It contains the application as well as anything the application needs to run. On top of that, it has an entire virtualized hardware stack of its own, including virtualized network adapters, storage, and CPU. What does this indicate? Yes, it has its own full-fledged guest operating system! So from the inside, the guest machine behaves as its own unit with its own dedicated resources. From the outside, it is just a virtual machine sharing resources provided by the host machine!
Here, I would like to raise an important question! Why do we need the additional hypervisor layer between the virtual machine and the host machine? Well, the virtual machine has its own operating system, and the hypervisor plays the essential role of providing virtual machines with a platform: it manages and executes the guest operating systems, and it allows the host computer to share its resources among all the virtual machines running as guests.
It seems like virtual machines and containers have the same characteristics. But in reality, they are different! So here is the list of some significant differences between virtual machines and containers.
(1) A virtual machine is loaded with a complete operating system and often many applications. It can take up several gigabytes, depending on the guest OS. In short, hypervisor-based virtualization is resource intensive.
(2) Virtual machines use hypervisors to share and manage hardware. Containers, on the other hand, share the kernel of the host OS to access the hardware.
(3) A virtual machine has its own kernel. It doesn't use or share the kernel of the host OS, so virtual machines are isolated from each other at a deep level.
(4) Virtual machines residing on the same server can run different operating systems. So, for example, one virtual machine can run Windows while the neighboring virtual machine runs macOS.
(5) Containers are bound to the host OS, so containers on the same server all use the same OS. Some might say they don't have their own freedom!
(6) Containers virtualize the underlying operating system, whereas virtual machines virtualize the underlying hardware.
Linux containers are nothing less than magic, and they are used extensively because they are effective. Linux Containers, also known as LXC, was the first major implementation of containers. LXC takes advantage of cgroups and namespace isolation to create a virtual environment with separate process and networking spaces.
This leads to independent and isolated user spaces. In short, today's container concepts are derived from LXC, and the earlier versions of Docker were built directly on top of it.
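You don't need LXC or Docker installed to see these building blocks; any modern Linux host exposes its namespaces and cgroups through /proc. A quick sketch (the exact output varies by machine):

```shell
# Each process runs inside a set of namespaces (pid, net, mnt, uts, ...).
# /proc exposes the namespaces of the current shell:
ls /proc/self/ns

# Each process is also tracked by control groups (cgroups), which meter
# and limit its CPU, memory, and I/O usage:
cat /proc/self/cgroup
```

A containerized process simply gets fresh namespaces and its own cgroup limits instead of sharing the host's.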
Docker is in trend and a huge success; it is one of the most widely used container technologies. In fact, when people refer to containers, they usually mean Docker. Beyond Docker, there are other open-source container technologies, such as rkt by CoreOS, and some large companies have built their own container engines, for example lmctfy by Google.
So Docker is now the industry standard when it comes to containerization. How is it built? On the cgroups and namespacing capabilities provided by the Linux kernel, with equivalent isolation features on Windows.
A Docker container is made up of different layers, including images and binaries packed into a single package. The base image carries the operating system of the container, which may or may not be the same as the host OS. This container OS comes in the form of an image, and there is an important difference from the host OS: the host OS is a full operating system, while the image contains just the file system and binaries of the OS. A full OS includes the file system, the binaries, and the kernel.
Multiple images sit on top of the base image, and together they make up the container; the arrangement is flexible. For example, on top of the base image there might be an image containing the apt-get dependencies, and above that an image containing the application binary. The most interesting part: if there are two containers with image layers 1, 2, 3 and 1, 2, 4, then you only have to store one copy of each image layer, both locally and in the repository. This is how the Docker union file system operates.
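This layering maps one-to-one onto the instructions in a Dockerfile: each instruction adds a new image layer on top of the previous one. A minimal sketch (the base image, package, and binary names are illustrative):

```dockerfile
# Layer: base image -- the container OS filesystem and binaries
FROM ubuntu:22.04

# Layer: apt-get dependencies, shared by every image built from the same base
RUN apt-get update && apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Layer: the application binary on top
COPY myapp /usr/local/bin/myapp

CMD ["myapp"]
```

If a second Dockerfile shares the same FROM and RUN lines, Docker stores and pulls those layers only once; only the differing top layers are duplicated.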
Docker is loaded with many cool features like:
(1) Copy on write.
(2) Docker daemon.
(3) Version-controlled repositories, and more.
Process isolation is just one property of containers. But apart from it, there are many other beneficial properties.
(1) A container serves as a self-isolated unit, which means it can run anywhere. At every instance, the container is exactly identical; the host OS doesn't matter. It could be CentOS, Ubuntu, macOS, or anything else.
(2) One can consider the container to be a standardized unit of computing. Generally, each container runs a single web server, a single shard of a database, and so on. So to scale an application, one simply scales the number of containers.
Here, each container is allocated a fixed resource configuration (CPU, RAM, etc.), so scaling the application means scaling the number of containers rather than the individual resource primitives. This gives engineers a much easier abstraction when applications need to be scaled up or down!
(3) Containers are a great tool for implementing the microservice architecture, where each microservice is a set of cooperating containers. For example, a Redis microservice can be implemented with a single master container and multiple replica containers. The microservice architecture has a lot of advantages compared to the traditional monolithic approach.
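The Redis example can be sketched as a Compose file, assuming Docker Compose (the service names, replica count, and resource limits are all made up): one master service plus a scaled replica service, each with a fixed resource configuration.

```yaml
# docker-compose.yml -- hypothetical Redis microservice
services:
  redis-master:
    image: redis:7
  redis-replica:
    image: redis:7
    # follow the master; Redis 5+ calls this "replicaof"
    command: redis-server --replicaof redis-master 6379
    deploy:
      replicas: 3          # scale the service by changing this number
      resources:
        limits:            # fixed per-container resource configuration
          cpus: "0.50"
          memory: 256M
```

Depending on your Compose version, `deploy.replicas` may require Swarm mode or a recent Compose release; `docker compose up -d --scale redis-replica=3` achieves the same effect.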
(1) Running containers is less resource intensive, so one can pack more computing workload onto the same server.
(2) While virtual machines consume several gigabytes, the average container is in the range of tens of megabytes, so a server can host many more containers.
(3) Containers are quick! Provisioning one takes just a few seconds, so they can respond rapidly to changes in user activity. Containers also reduce the time needed for development, testing, and deployment.
(4) Finding and fixing errors is easy with containers. Why? Because there is no difference between running an application locally and running it on a test server.
(1) Security is more of a concern with container-based virtualization than with traditional virtual machines. Containers share the kernel and other components of the host operating system, and they often run with root access, so containers are less isolated from each other. Of course, this depends on the application and its configuration; it is just a generalization.
(2) There is less flexibility in operating systems. If you want to run containers with different operating systems, you need to start a new server.
Containers are awesome! I love them. They are being adopted at a remarkable pace by small-scale and mid-scale businesses; it is not just the tech giants using containers. Enterprises are looking forward to adopting them in production environments. Some of the most famous examples of containers at grand scale are Google Search, Google App Engine, and Twitter.
On the other side, virtual machines are considered a mature technology with a higher level of security. Containers live in a dynamic, fast-changing world, and because they optimize resource utilization and flexibility, enterprise IT organizations love them.