Are Docker and Kubernetes Offering the “Best Architecture”?

Hi there, I am back with another article. But this time, I am in the mood to resolve some myths.

The world is changing every second, and so is the technology. You must be hearing a lot about Docker and Kubernetes, especially comparisons of their architectures. A lot of people wonder which one is better.

There are many other questions, and I am mentioning all of them for one simple reason: they will help you understand Kubernetes and Docker. How have Docker and Kubernetes changed software development? Does either of them offer a strong architecture? Is it possible to unify development and integration processes? If yes, what are the restrictions? Will this reduce complications for developers?

First of all, understand that comparing Docker and Kubernetes is not as easy as you might think. Which is better, Docker or Kubernetes? I believe there is no “better,” because the two are different things. Docker is the bus, while Kubernetes is the bus terminal. So which one is important? Certainly, both are important. You need both of them.

So in this article, we will travel from real life to development processes, to the architecture, and back to real life.

Further, we will identify different components and principles that are a part of the architecture. In the end, the conclusion might surprise you or please you. It depends on your perception and experience with Docker and Kubernetes.

 

The first move from real life to development workflows

Do you know why development processes are important? Without them, there is no well-directed, generalized approach. A development process reduces the time between the birth of an idea and its delivery. It simplifies the process and maintains quality.


There are two types of ideas: good ones and bad ones. There is no third, intermediate type of idea in development. An idea can be good or bad, but it can only be measured by its implementation. If it is a bad idea, you implement it and roll back! If it is a good idea, you just carry on. The rollback is controlled by a robot, i.e. automation.

Out of all this, continuous integration and delivery systems have emerged as a lifesaver. They have made things easier: if you have an idea and the code, implement it! But there is one little problem with integration and delivery systems. The process is difficult to formalize in isolation from the technology and business processes specific to your company.

So how was this problem solved?

This problem was solved with the help of Docker and Kubernetes. Both of them appeared as messiahs. Their level of abstraction and ideological approach solved almost 80% of the problem. Mind you, 80% is a very good percentage in the development sector. The other 20% is still there, and you need to be a creative genius to solve it. It depends on the type of application and the way you solve it.

Docker solves the problem of the development workflow. It offers a simple process and is sufficient for most work environments.


Image Source : https://cdn-images-1.medium.com/max/800/0*nDrc_jeOKTMS7akB.

With the help of this approach, one can automate and unify everything.

Introduction to the development environment


First of all, the project must include a docker-compose.yml file. The advantage of this file is that it removes the burden of running the application/service on the local machine. The developer doesn’t need to think about it; in fact, a simple docker-compose up command is enough to start the application. The command also takes care of the dependencies, populating the database with fixtures, uploading the local code inside the container, enabling code tracing, and responding at the expected port.
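A minimal sketch of what such a file could look like, assuming a hypothetical web service backed by PostgreSQL (the service names, images, ports, and paths are illustrative, not taken from any real project):

version: "3.7"

services:
  app:
    build: .                      # build the image from the local Dockerfile
    ports:
      - "8080:8080"               # respond at the expected port
    volumes:
      - ./src:/app/src            # upload the local code inside the container
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db                        # the dependency is started automatically

  db:
    image: postgres:11
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - ./fixtures:/docker-entrypoint-initdb.d   # populate the database with fixtures

With a file like this, docker-compose up brings up the whole stack on the developer’s machine.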

In addition to all this, you need not worry about starting the service, committing changes, framework selection, etc. Everything is described beforehand in the standard instructions, dictated by the service templates for setups like frontend, backend, and worker.

 

Time for the automated testing

Have you heard about the “black box”? It records everything, even the last seconds before a plane crash. What happened? How did it happen? All this is obtained from the black box. Similarly, there is a black box here.


Image Source : https://cdn-images-1.medium.com/max/800/0*B5Qn9D5W4lscB86h.

Like the original black box, our container black box stores everything. All the binary data, the 1s and 0s, etc. is stored in the black box. So how does automation happen? It’s easy: you have a set of commands, and the docker-compose.yml describes all their dependencies. This leads to automated and unified testing. There is no need to focus on the implementation details.
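As a sketch of this black-box style of testing, a hypothetical tests service can be added to the same docker-compose.yml; the service name and test command are assumptions for illustration:

services:
  # ...the app and db services shown earlier...
  tests:
    build: .
    command: pytest -q tests/       # the unified test entry point
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db

Running docker-compose run --rm tests then starts the dependencies described in the file and executes the test suite against them, without caring how the services are implemented internally.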

 

The Systems Delivery


Image Source: https://cdn-images-1.medium.com/max/800/0*DC5Ubn7Y5seJPeSO.

The location and the time for installing your project don’t matter. There is no difference as to which part of the whole ecosystem you’re going to be installing. The result of the installation process will always be the same. The thing that matters the most is “idempotence”. From your side, specify the variables that control the installation.

 

There is an algorithm for all this. Let us list it down stepwise.

(1) Build the images from the Dockerfiles.

(2) Use a meta-project to deliver the images to Kubernetes via the Kube API. The required input parameters are:

(a) A Kube API Endpoint

(b) A secret object varying for different contexts (local/showroom/staging/production)

(c) System names and the tags of the Docker images for these systems.

For example, consider a meta-project consisting of all the systems and services. It describes the arrangement of the ecosystem and also describes how updates are delivered to it. For this, I would use Ansible playbooks for integration with the Kube API. But there are other options as well! Overall, you need to think in a centralized direction about managing the architecture. Once you are confident with it, you can conveniently manage the services/systems.
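A minimal sketch of what such a playbook could look like, assuming the kubernetes.core Ansible collection (and its Python dependencies) is installed; the deployment name, namespace variable, registry, and image tag variable are all hypothetical:

# deliver.yml - apply one system's Deployment through the Kube API
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Deliver the backend image to the cluster
      kubernetes.core.k8s:
        state: present
        namespace: "{{ target_namespace }}"    # local/showroom/staging/production
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: backend
          spec:
            replicas: 2
            selector:
              matchLabels:
                app: backend
            template:
              metadata:
                labels:
                  app: backend
              spec:
                containers:
                  - name: backend
                    image: "registry.example.com/backend:{{ image_tag }}"

The Kube API endpoint and credentials come from the kubeconfig (or from the module’s connection parameters), and the image tags are passed in as variables, one per system.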

An environment needs to be installed in the showroom (for manual checks and debugging of the system), in staging (a near-live environment for integration), and in production (the actual environment for the end user).

 

Continuous Delivery & Integration


You are a happy person if you have a unified way of testing Docker images. It will help you seamlessly integrate a feature branch into the upstream or master branch of the git repository. The only thing to take care of is maintaining the sequence of integration and delivery. If there are no releases, how would you prevent a “race condition” on one system with several parallel feature branches?

So what is the solution? The process should start when there is no competition.

The steps:

(1) Update the feature branch to the fresh upstream.

(2) Build images.

(3) Test the built images.

(4) Start the delivery and wait until the images built in step 2 are delivered.

(5) If step 4 fails, roll back the ecosystem to the previous state.

(6) Merge the feature branch into upstream and push it to the repository.

If any of the above steps fails, the delivery process immediately terminates and the task is returned to the developer until it is solved. The same process can work with more than one repository; each of the steps should then be done in the same way, but per repository. For example, step 1 for repository A and step 1 for repository B, and so on.
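As an illustration only, here is how these steps might map onto a GitLab-CI-style pipeline; the job names, registry, and scripts are assumptions (registry authentication and other details are omitted):

stages:
  - build
  - test
  - deliver

build_images:
  stage: build
  script:
    - docker build -t registry.example.com/backend:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/backend:$CI_COMMIT_SHORT_SHA

test_images:
  stage: test
  script:
    - docker run --rm registry.example.com/backend:$CI_COMMIT_SHORT_SHA pytest -q

deliver:
  stage: deliver
  script:
    # the meta-project delivers the freshly built tag through the Kube API
    - ansible-playbook deliver.yml -e image_tag=$CI_COMMIT_SHORT_SHA -e target_namespace=staging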

Kubernetes gives you the freedom to roll out updates in parts, so numerous A/B tests can be rolled out and undergo risk analysis. Kubernetes internally separates services and applications.
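For such partial rollouts, Kubernetes Deployments expose a rolling-update strategy; a trimmed sketch, with purely illustrative names and numbers:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one old replica at a time
      maxSurge: 1         # create at most one extra new replica at a time
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:v2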

 

The rollback systems

One of the most important abilities of a strong architectural framework is the ability to roll back. There are a number of explicit and implicit nuances. Let us have a look at them:

(1) The service should be able to set up its environment and also roll back.

(2) If rollback isn’t possible, the service should be polymorphic. It should support both the old and the new versions of the code.

(3) Any service should be backward compatible after a rollback.

In a Kubernetes cluster, it is easy to roll back states, but it will only work if your meta-project contains information about this snapshot. Complex delivery rollback algorithms are discouraged, but they are sometimes necessary.

So under what circumstances should the rollback mechanism be triggered?

(1) When the percentage of application errors is high after the release.

(2) Once you start receiving signals from key monitoring points.

(3) When smoke tests fail.

(4) It can be done manually.

 

Security Measures

Making the ecosystem bulletproof isn’t easy, and it can’t be done by just following a single workflow. The architectural framework should be secure enough to tackle any issues. Kubernetes comes with some good built-in mechanisms for access control, network policies, the auditing of events, etc. There are also dedicated tools for information security, and they have proved to be excellent in terms of protection.
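As one concrete example of these built-in mechanisms, a NetworkPolicy can restrict which pods are allowed to talk to a service; the namespace and labels below are hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend              # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only the frontend may reach the backend
      ports:
        - protocol: TCP
          port: 8080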

 

The next step, from development workflows to the architecture

An architectural framework that offers flexibility, scalability, availability, reliability, protection against threats, etc. is almost mandatory. This is a crucial thing. In fact, this need led to a new concept. Any guesses? Yes, DevOps. It led to the concept of a completely automated and optimized infrastructure.

Now it was time to change the architecture from monolithic to microservices, which offers the immense benefits of a service-oriented architecture. At least for Docker and Kubernetes, it is ideologically wrong to use a monolithic architecture. I will certainly discuss some points of the microservices architecture. For in-depth knowledge of DevOps, please read this article on DevOps.

Now, we will quickly discuss the main critical components and the solutions of a good architecture.


The Critical Components

 

Number 1- Identity Service:

The identity microservice refers to “identity as a microservice.” Such services are lightweight; they provide modularity and bring flexibility to the application. The identity microservice is powerful, has access to all the profile data, and is capable of providing everything that is needed at the core of all the applications.

If you want to be a client of major enterprise platforms like IBM, Google, Microsoft, etc., access will be handled by the vendor’s services. But what if you want your own solution? The list in the upcoming section will help you decide.


 

Number 2- The automated service provisioning:

The services are built independently of each other. This leads to easier development and fewer errors. It also helps in discovering other services in a dynamic and automated way.

Kubernetes reduces the need for additional components, but one still needs to automate the addition of new machines. Here is a list of tools:

(1) KOPS – Helps you install a cluster on AWS or GCE.

(2) Terraform – Helps you manage the infrastructure for any environment.

(3) Ansible — Offers automation of any kind.

I personally recommend Ansible because it lets you work with both servers and Kubernetes objects.

 

Number 3- Git repository and task tracking:

With Git repositories, the tasks can be dealt with easily. The basic idea is to have a small repository that serves as an environment tracker. Its content describes which versions should be used for the various services. The preferred source control system for this is Git.

There should be a proper workplace for teamwork, code storage, and all important discussions. If you want a free service, go for Redmine; Jira is a paid service and is quite useful. For the code repository, Gerrit is a great choice, for free!

 

Number 4- The Docker Registry:

The Docker registry is a storage and content delivery system. It holds named Docker images, available in different tagged versions. To interact with a registry, the user issues push and pull commands.

The Docker image management system is quite important. The system should also support access control for users and groups of users. For this, choose a cloud solution or some privately hosted service. A good option is VMware Harbor.
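To illustrate how a cluster consumes images from such a private registry (Harbor or any other), a workload references the registry by name and pulls with a credential secret; the hostname and names below are assumptions, and the secret would be created beforehand (for example with kubectl create secret docker-registry):

apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  imagePullSecrets:
    - name: harbor-credentials      # registry credentials stored as a secret
  containers:
    - name: backend
      image: harbor.example.com/team/backend:1.4.2   # <registry>/<project>/<image>:<tag>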


 

Number 5- CI/CD and service delivery:

Only a continuous integration and delivery service can connect the components discussed earlier. Continuous delivery implies that the service is simple and devoid of any logic. The CI/CD service should only react to events from the outside world, such as changes in the git repository.

The integration service is responsible for automatic service testing, service delivery, rollback, service removal, and image building.

 

Number 6- Log collection and analysis:

In any microservices application, tracking down a problem is important. Tracking is possible with the help of logging, so logging and monitoring give you a holistic view of the system.

Logs are made accessible by writing them to the STDOUT or STDERR of the root process. The log data should be available when needed, and it should also contain records from the past.


Number 7- Tracing, Monitoring, and Alert:

Tools like OpenTracing and Zipkin help you understand where you went wrong and how it happened. These tools will help you answer such questions. Failures do happen, and tracing them is important.

Further, monitoring is divided into three levels: the physical level, the cluster level, and the service level. There is no room for errors in monitoring. Tools like Prometheus and OpsGenie have proved to be quite helpful for monitoring, and OpsGenie also alerts and notifies about issues at all the levels. So tracing, monitoring, and alerting should never be taken lightly; they are the defensive part of the application.
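For instance, a Prometheus alerting rule for the “high percentage of application errors” trigger mentioned in the rollback section might look roughly like this; the metric names and thresholds are assumptions:

groups:
  - name: backend-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "More than 5% of requests are failing"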

 

Number 8- API Gateway with Single Sign-on:

In microservices, all calls should go through the API Gateway only. It helps maintain security and is also responsible for routing the API requests, so the API Gateway is the entry point for all of its respective microservices. Single sign-on refers to a session-based user authentication service: as the name suggests, the login credentials are entered a single time and can then be used for accessing multiple applications.

One needs a reliable service for handling tasks like authorization, authentication, user registration, single sign-on, etc. The service integrates with the API Gateway and everything is handled through it.
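As a rough sketch of the “single entry point” idea, here is a Kubernetes Ingress that routes API paths to different backing services; the hostname and service names are illustrative, and a real setup would add the SSO/authentication layer in front of, or inside, the gateway:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: identity-service     # the identity microservice from earlier
                port:
                  number: 80
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80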

 

Number 9- The event bus:

If the ecosystem has hundreds of services, they need to be dealt with carefully. Inter-service communication is a must, and there is no room for error. The data flow should be streamlined. An event bus provides a well-directed flow of events from one microservice to the others.


 

Number 10- Databases and stateful services:

In a microservices-based application, there are usually numerous services, and their data storage requirements differ according to the service roles. So some services are fine with a relational database, while others might need a NoSQL database like MongoDB.

Docker has changed the rules of the game. The database occupies the central place of importance in the storage world, so whatever the solution is, it should be able to work in a Kubernetes environment with ease.
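For stateful services such as databases, Kubernetes offers StatefulSets with persistent volume claims; a heavily trimmed sketch (the image, names, and sizes are assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data   # survives pod restarts
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi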

 

Back to reality, from architecture to real life

I will be quite honest with you in sharing my views. I believe that in the future, entire architectures will be considered failures. The design principles, the fundamentals, everything is changing! But you need to stay on top of the game. For this, integrate into the professional community. Sooner or later, you will have to adapt to these changes, so why not begin now?

There are many opportunities, but only if you keep yourself up to date with new technology.

Now, coming back to the title of this article: do Docker and Kubernetes offer the best architecture? For now, certainly “yes.” But this might only be the best architecture for the time being. Strive for more, strive to build a better architecture, better than the best!

I am sharing a few useful links with all of you. 

Docker Article: Docker Tutorial: Containers, VMs, and Docker for Beginners

Docker Video Tutorial: Docker Video Tutorial Series

Kubernetes Article: The Kubernetes Bible for Beginners & Developers

Kubernetes Video Tutorial: Kubernetes Video Tutorial Series

James Lee

James Lee is a passionate software wizard working at one of the top Silicon Valley-based startups specializing in big data analysis. In the past, he has worked at big companies such as Google and Amazon. In his day job, he works with big data technologies such as Cassandra and Elasticsearch, and he is an absolute Docker technology geek and IntelliJ IDEA lover with a strong focus on efficiency and simplicity.

