Hello friends, it is awesome to see you all again! How are you doing? I am doing great! So let us continue our journey through this microservice architecture article series. Our previous article was about circuit breaker patterns. Wasn’t it interesting to learn about those patterns? Now it is time to discuss the deployment patterns of microservices. In a monolithic application, deployment means running multiple, identical copies of a single large application. That deployment isn’t trivial, but it is simple compared to deploying microservices. A microservice application consists of many services, possibly written in different languages and frameworks, and each service has its own deployment specifications.
For example, one needs to run a certain number of instances of each service, based on the demand for that service. Each service instance must also be provided with appropriate CPU, memory, and I/O resources. There are a lot of challenges when it comes to deployment, and despite all of them, the deployment has to be fast, reliable, and cost-effective. There are different strategies for deploying microservices, known as deployment patterns, and we will discuss them in this article.
The first pattern is the Multiple Service Instances per Host pattern. In this pattern, one provisions physical or virtual hosts and runs multiple service instances on each of them. This can be considered the traditional approach to application deployment: each service instance runs at a well-known port on one or more hosts, and the machines are treated like pets. This pattern has more than one variant. One variant is that each service instance is a process or process group. For example, one might deploy a Java service instance as a web application on an Apache Tomcat server, while a Node.js service instance might consist of a parent process and one or more child processes.
Another variant of this pattern is running multiple service instances within the same process or process group. For example, one could deploy multiple Java web applications on the same Apache Tomcat server, or run multiple OSGi bundles in the same OSGi container. OSGi stands for Open Services Gateway Initiative.
The pattern has its own advantages and disadvantages. Let us begin with the advantages. The first advantage is that this pattern offers relatively efficient resource usage, because multiple service instances share the server and the operating system. The efficiency is even higher when multiple service instances run within the same process or process group, for example, multiple web applications sharing the same Apache Tomcat server and JVM. The second advantage is that deploying a service instance is pretty quick: one simply copies the service to a host and starts it. For services written in Java, one copies the JAR or WAR file; for languages like Node.js or Ruby, the source code is copied. In both cases, the number of bytes transferred is small. The service also starts fast, since there is negligible overhead. If the service is its own process, one simply starts it. If it is one of several instances running within the same container process or process group, one either dynamically deploys it into the container or restarts the container.
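To make the "copy and start" step concrete, here is a minimal shell sketch. All file and directory names are hypothetical, and the host's Tomcat webapps directory is simulated with local folders so the steps can be followed anywhere:

```shell
#!/bin/sh
# Simulated host layout; in reality these would be directories on the
# target machine (e.g. /opt/tomcat/webapps). All names are placeholders.
mkdir -p build host/tomcat/webapps

# Stand-in for a real WAR file produced by the build.
printf 'placeholder WAR contents' > build/order-service.war

# "Deployment" is just a copy: the WAR lands in Tomcat's webapps
# directory, from which the server hot-deploys the web application.
cp build/order-service.war host/tomcat/webapps/

echo "deployed: $(ls host/tomcat/webapps)"
```

For a service that runs as its own process, the equivalent step is copying the JAR to the host and starting it with something like `java -jar order-service.jar`.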
Now, it’s time to discuss the disadvantages. The first disadvantage is that there is little or no isolation between service instances, unless each instance runs as its own process. Without that separation, one can’t limit the resources each instance uses. One can still monitor each instance’s resource consumption, but one can’t enforce limits: a misbehaving service instance could consume all the memory or CPU of the host, which would be a huge blow to the system. The second disadvantage is that the operations team must know the specific details of deploying each service. Services can be written in many languages and frameworks, so the operations team has to keep track of a lot of details, and the development team has to share a lot of details as well. The complexity increases, and so do the chances of errors.
So these are the advantages and disadvantages of this pattern. Now let us discuss another pattern.
This is another way of deploying microservices, known as the Service Instance per Host pattern. Here, each service instance runs in isolation on its own host. The pattern has two specializations: Service Instance per Virtual Machine and Service Instance per Container. Let us discuss both of them.
In this specialization, each service is packaged as a virtual machine image, such as an Amazon EC2 AMI. Each service instance is then a VM, for example an EC2 instance, launched from that image. Interestingly, this approach is used by Netflix to deploy its video streaming service: Netflix packages each service as an EC2 AMI with the help of Aminator, so each running service instance is an EC2 instance.
There are numerous tools available for building VM images. One can configure a continuous integration server, such as Jenkins, to invoke Aminator to package services. Another option for creating VM images automatically is Packer.io. It differs from Aminator in that it supports numerous virtualization technologies, including EC2, DigitalOcean, VirtualBox, and VMware. Meanwhile, a company called Boxfuse builds VM images in a compelling way: it packages a Java application as a minimal VM image. These images are fast to build and fast to boot, and they are more secure because they expose a limited attack surface. This overcomes several of the drawbacks of VMs!
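As an illustration of what such automated image building looks like, a Packer template for baking a service into an AMI might resemble the following sketch. The region, base AMI ID, service name, and provisioning commands are all placeholder assumptions, not a working configuration:

```hcl
# Hypothetical Packer (HCL2) template: bake a Java service into an AMI.
source "amazon-ebs" "order-service" {
  region        = "us-east-1"               # placeholder region
  instance_type = "t3.micro"
  source_ami    = "ami-0abcdef1234567890"   # placeholder base image
  ssh_username  = "ec2-user"
  ami_name      = "order-service-build-001" # usually made unique per build
}

build {
  sources = ["source.amazon-ebs.order-service"]

  # Prepare the instance: install a JRE and a directory for the artifact.
  provisioner "shell" {
    inline = [
      "sudo yum install -y java-17-amazon-corretto",
      "sudo mkdir -p /opt/order-service",
    ]
  }
}
```

A `file` provisioner would typically upload the service's JAR as well; a continuous integration server such as Jenkins would then run `packer build` on every release.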
The Service Instance per Virtual Machine pattern has numerous advantages. The first is that each service instance runs in complete isolation: it gets a fixed amount of CPU and memory and can’t grab resources from other services. This eliminates the risk of one service starving the others. The second advantage is that, by deploying microservices as VMs, one can leverage mature cloud infrastructure; clouds like AWS provide useful features such as load balancing and autoscaling. The third advantage is the encapsulation of the service’s implementation technology. Once a service is packaged as a VM, it becomes a kind of black box: the VM’s management API becomes the API for deploying the service, and deployment becomes simpler and more reliable.
There are a few disadvantages to this approach. The first is less efficient resource utilization: each service instance carries the overhead of an entire VM, including the operating system. In a typical public IaaS, VMs come in fixed sizes, so there is a high possibility that a VM will be underutilized. A public IaaS also typically charges for VMs regardless of whether they are busy or idle. An IaaS like AWS provides autoscaling, but it is still difficult to react quickly to changes in demand. This often leads to over-provisioning of VMs, and ultimately to higher deployment costs. The second disadvantage is that deploying a new version of a service is slow: VM images are usually slow to build, depending on their size, and they are also slow to instantiate. This can change with the type of VM; these days, lightweight VMs like those built by Boxfuse make a difference. The third disadvantage is that there is a lot of undifferentiated heavy lifting: unless you use a tool that handles the overhead of building and managing VMs, you are responsible for it yourself, and this activity is time-consuming, though necessary. Let us now have a look at the other specialization of this pattern.
In this specialization, each service instance runs in its own container. Containers are a virtualization mechanism at the operating-system level. Each container consists of one or more processes running in a sandbox: the processes have their own port namespace and root filesystem. The user can limit a container’s memory and CPU resources, and some container implementations also offer I/O rate limiting. Docker and Solaris Zones are examples of container technology. Here, the service is packaged as a container image, a filesystem image containing the applications and libraries required to run the service. Some container images consist of a complete Linux root filesystem, while others are more lightweight.
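For example, packaging a Java service as a Docker image can be sketched with a Dockerfile like the one below; the base image tag and artifact names are placeholders, not the only way to do it:

```dockerfile
# Hypothetical Dockerfile: package a pre-built service JAR as a container image.
FROM eclipse-temurin:17-jre
COPY build/order-service.jar /app/order-service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/order-service.jar"]
```

The memory and CPU limits mentioned above are then applied at run time, for example `docker run --memory=512m --cpus=0.5 order-service`.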
The service, packaged as a container image, is launched as one or more containers, and multiple containers can run on each physical or virtual host. To manage the containers, a cluster manager such as Kubernetes is used: it treats the hosts as a pool of resources and decides where to place each container. The advantages of containers are similar to those of VMs. Containers isolate the service instances, make it easy to monitor their resource usage, and encapsulate the technology used to implement the service. The container management API serves as the API for managing the services. The difference between VMs and containers is that containers are a lightweight technology: container images are fast to build, and containers start quickly.
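As a sketch of how a cluster manager consumes this information, a hypothetical Kubernetes Deployment might declare the number of instances and per-container resource limits like this (all names and values are illustrative):

```yaml
# Hypothetical Kubernetes Deployment: three replicas of a service,
# each with explicit CPU/memory requests and limits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```

Given the `requests` values, the Kubernetes scheduler picks hosts from the pool with enough spare CPU and memory for each of the three replicas.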
The disadvantage of containers is that they aren’t as mature as VMs. Containers are also less secure than VMs, because they share the kernel of the host OS with each other. Another disadvantage is that you are responsible for the undifferentiated heavy lifting of administering the container images, and you must administer the container infrastructure and possibly the VM infrastructure it runs on. Also, containers are often deployed on infrastructure with per-VM pricing, so you are likely to incur the extra cost of over-provisioning VMs to handle spikes in load.
There is also a unique concept that has gained popularity, known as serverless deployment. It sidesteps the issue of choosing between deploying services in containers or VMs. Let us have a look at this concept.
Serverless deployment hides any concept of reserved or pre-allocated resources: physical hosts, virtual machines, and containers are all invisible. The infrastructure takes the service’s code and runs it, and each request is charged based on the resources consumed. To deploy a service this way, the code is packaged, for example as a ZIP file, and uploaded to the deployment infrastructure, and only its performance characteristics need to be specified. The infrastructure is a utility operated by a public cloud provider. It uses containers or virtual machines to isolate the services, but these details are encapsulated and hidden, so there is no need to manage low-level infrastructure such as operating systems or virtual machines.
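As an illustration of "upload the code and declare only the performance characteristics", a hypothetical AWS SAM template for a single function might look like this (the handler name, code path, and sizing are placeholder assumptions):

```yaml
# Hypothetical AWS SAM template: the code under ./order-function/ is zipped
# and uploaded; only the runtime and performance characteristics are declared.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  OrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler       # module "app", function "handler"
      Runtime: python3.12
      CodeUri: ./order-function/
      MemorySize: 256            # MB; the main performance knob
      Timeout: 10                # seconds
```

Notice that there is no mention of hosts, VMs, or containers anywhere; the provider decides where and how the function runs.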
The first advantage of this approach is that it eliminates the time spent on undifferentiated heavy lifting and managing low-level infrastructure, so the developer can focus on the code. Secondly, the architecture is elastic: it automatically scales the services to handle the load. Thirdly, paying per request rather than provisioning capacity in advance can prove cost-effective. As for the disadvantages, serverless deployment imposes many more constraints than the other approaches. For example, AWS Lambda supports only a few languages, and serverless deployment is only suitable for stateless applications. Second, applications must start quickly; if a service takes a long time to start, serverless deployment is not a good fit. Third, there is a risk of high latency: the time the infrastructure takes to provision an instance of your function, plus the time the function takes to initialize, might result in significant latency, so the application can exhibit high latency during sudden massive spikes in load.
After discussing the circuit breaker pattern in our previous article, it was time to discuss deployment, so this article covered the deployment patterns. First of all, we understood the basics of deploying a microservice application. Deploying one is a challenge: there are numerous services, possibly written in different languages and frameworks, and each one can be considered a mini-application. To deploy them efficiently, there are different patterns. We discussed these patterns, starting with the Multiple Service Instances per Host pattern: we covered its basics and its advantages and disadvantages.
We then switched over to the next pattern, the Service Instance per Host pattern, which has two specializations: the Service Instance per Virtual Machine pattern and the Service Instance per Container pattern. After discussing these patterns, we discussed the trending serverless deployment pattern. So this is all about microservices and deployment patterns. See you soon guys, in the upcoming article of the microservice article series. Take care!
Here is the link to the previous article of this series.