Hello friends, all good? So this week, like every other week, I was thinking about Kubernetes, my favorite! We have already discussed a lot about Kubernetes. Check the list of articles if you don't believe me! But there is no end to Kubernetes. We have covered this open-source system quite openly and extensively, from its amazing capabilities as a container platform, a microservices platform, and a portable cloud, to its scope for improvement. Kubernetes just can't be wrapped up easily. But this time, I want to shift from the theoretical part to real-world applications. Why? Unless we see successful real-world applications, talking big about Kubernetes is literally useless. If it is amazing, it should be powering amazing applications.
There are many Kubernetes case studies out there, such as IBM, Ocado Technology, Adform, and NAIC (National Association of Insurance Commissioners). But I am most interested in discussing eBay and Pokemon Go. Both of them are interesting, aren't they? So let us begin!
Most of us know eBay, right? Yes, the e-commerce giant! There are many e-commerce giants, and most of them are doing pretty well, because the number of Internet users is increasing every single day. But have you ever thought about how they operate? E-commerce websites are not just meant to keep customers happy. They also need to keep their developers happy! For this, they try something different, something better, and something less complicated. With that intent, eBay developed a framework for deploying containers on its OpenStack cloud.
eBay was founded after Amazon. To be more precise, it was founded a year after Amazon, so the trend of online marketing and shopping had just begun. As per the statistics, eBay had more than 175 million active users in the second quarter of 2018, up from 171 million in the first quarter. The statistics themselves show that the number of eBay users is increasing, almost every single day!
So eBay is not a small thing; it is huge and ever-growing. The product listings on eBay number over 800 million. With such huge numbers, eBay, or any company of that size, needs to embrace new technologies early. Even compared to small and conservative enterprises, its adoption of new technology happens on a larger scale and with higher risk. It is not just the huge numbers; the infrastructure itself forces these companies to keep upgrading.
A couple of years back, eBay was enthusiastically using OpenStack for managing its cloud infrastructure, having shifted its virtualized machines from VMware's ESXi hypervisor to KVM running on OpenStack. I must tell you that eBay has its own OpenStack implementation, so it is customized to a great extent. In the initial stages, there were around 300 servers running the “Essex” release of OpenStack. Three years later, eBay and PayPal were running the “Havana” release, but on a much larger scale, with over 300,000 cores supporting more than 12,000 KVM hypervisors with Open vSwitch virtual switches.
Next, eBay turned its focus to Kubernetes, the container scheduling system open sourced by Google, which it wanted to integrate with its OpenStack clouds for managing containerized applications.
eBay's cloud journey began like any other company's. But they always focused on keeping their developers happy. With the emergence of Docker, it was pretty clear that developers were, and still are, in love with container technology.
If we compare eBay with other IT giants like Google, Amazon, Microsoft, and Facebook, its scale might not look big. But eBay's infrastructure is still large and taken seriously. It includes containerized data centers powered by fuel cells, using hyper-scale class machines from companies like Dell and Hewlett Packard. On top of that, these machines are customized around Intel Xeon processors. Every company has its own perspectives and working patterns; at eBay, they like to run the processors a little hot to get more performance per system.
Kubernetes events are simply amazing and worth attending. At the KubeCon 2015 conference, Ashwin Raveendran (a senior member of eBay's cloud technical services team) discussed how the company wanted to augment its OpenStack cloud with the Kubernetes container scheduler. Now, that was way back in 2015; they must have achieved it by now! Why do I mention 2015? Because it was one of the first public examples of a hyperscaler committing to the fusion of OpenStack and Kubernetes.
A typical availability zone at eBay consists of 5,000 to 20,000 servers, podded into chunks of 500+ nodes. eBay's core count has grown linearly, while the number of virtual machines and projects started growing exponentially around 2013. In terms of storage, eBay has more than 200 PB of capacity, running in chunks installed across its server farms (this is what I assume!). Approximately 120 PB of that storage supports Hadoop, making it one of the largest analytics setups in the world.
eBay processes billions of queries and serves more than 20 billion images per day, all for its auctioning and retailing application. So it is not an easy operation; it is complicated.
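To put that number in perspective, here is a quick back-of-envelope calculation. The 20 billion figure comes from the article; everything else is simple arithmetic:

```python
# Back-of-envelope: what "20 billion images per day" means per second.
images_per_day = 20_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400 seconds

images_per_second = images_per_day / seconds_per_day
print(f"{images_per_second:,.0f} images served per second on average")  # ≈ 231,481
```

That is roughly 231,000 images every second, around the clock, and that is only the average, not the peak.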
The main intention of using OpenStack is to allow developers to self-service their development needs. But over time, containers overtook virtual machines as the way to run code on OpenStack. eBay then settled on Kubernetes as its management solution and developed TessMaster, a replacement for Magnum (OpenStack's container management tool). eBay has its own custom build of Kubernetes, known as Tess.io. Kubernetes is a revitalizing ingredient for OpenStack and even for other independent developments in the OpenStack world.
Overall, the mixture of OpenStack, Docker, and Kubernetes has yielded good results for eBay to date. But one should also understand that the most engineering-driven IT organizations can bend OpenStack to their will, and they also favor other solutions for the convenience of their developers.
On 6th July 2016, the augmented reality game "Pokemon Go" was launched by Niantic for Android and iOS devices. The game was initially launched in a few countries and then rolled out to the others. It was such a huge success that in countries where it hadn't yet launched, people started downloading it through proxies and other unofficial channels. Pokemon Go and Kubernetes share a special bond. Let us learn something about it.
I am sharing a few images that will help you understand the success of Pokemon Go.
Experts started analyzing this game in different ways. For example, have a look at the image below:
So what is the infrastructure that helped Pokemon Go get a green signal for success? Let us understand it.
Niantic and Google share a special bond. Why? Niantic was Google CRE's first customer, and its first project was Pokemon Go. This bond is also significant because it proved that tech giants could come together and develop something pretty amazing.
Within 15 minutes of launching Pokemon Go in Australia and New Zealand, player traffic had already surpassed the expectations of Niantic and Google. The launch was also a confidence boost for Niantic. In fact, Niantic had to call in Google CRE for reinforcements. The next day, the application launched in the US, and the story was the same: Niantic and Google ran short of Pokemon trainers across the CRE, SRE, development, product, support, and executive teams. Pokemon Go was literally flooding everywhere. From social media to local newspapers, it simply went viral.
Understand that Pokemon Go is a mobile application that uses many services, all accessed from the Google cloud. As a result, Cloud Datastore traffic became a direct proxy for the game's popularity, since it is the game's primary database. Going by the popularity statistics, the developers had targeted 1x player traffic, with a worst-case estimate of 5x. But that was a rough estimate; in reality, traffic went to almost 50x, ten times the worst-case estimate! While this was good news in terms of a successful application launch, it was also a challenge for the developer teams. To meet the demand, Google CRE provisioned extra capacity on behalf of Niantic.
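The 1x / 5x / 50x story can be sketched in a few lines. The multipliers come from the account above; the baseline unit itself is arbitrary, since Niantic's absolute traffic figures were not published:

```python
# Capacity-planning multipliers from the Pokemon Go launch story.
target = 1.0              # 1x: the launch-target traffic estimate
worst_case = 5 * target   # 5x: the provisioned worst-case estimate
actual = 50 * target      # 50x: the traffic actually observed

# How far reality overshot the worst-case plan:
overshoot = actual / worst_case
print(f"Actual traffic was {overshoot:.0f}x the worst-case provisioning")  # 10x
```

This is exactly why the launch stressed the backend: even a generous 5x safety margin was a tenth of what was actually needed.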
Everything wasn't smooth during and after the launch. The game had to be stabilized, and Google, as well as Niantic, dealt with the difficult situations one by one, quickly creating and deploying solutions. Every part of the architecture was reviewed by Google's core cloud engineers and product managers. Please note, all this was done on the backend while millions of players were playing and joining the game! I wonder how it felt working on the backend while millions were in action.
To date, Pokemon Go is considered one of the finest examples of container-based development in the wild. The application logic runs on GKE (Google Container Engine), which is powered by the open source Kubernetes project. So why was GKE chosen by Niantic? Because of GKE's ability to orchestrate container clusters at planetary scale. Niantic made the perfect decision, because GKE freed Niantic's team from managing the clusters themselves, so they could concentrate on deploying live changes for existing players. Niantic thus used Google Cloud to bring Pokemon Go to millions of players while continuously improving it.
After experiencing the success and high traffic of Pokemon Go, Google decided to upgrade GKE to a newer version that allowed more than a thousand additional nodes to be conveniently added to the container cluster. It is believed this was preparation for launching Pokemon Go in Japan, as both companies estimated that Japan alone would break all the records. The upgrade was done carefully with the existing players in mind; it never affected them, even while millions of players were signing up! On top of all this, the engineers of both companies also replaced the Network Load Balancer with the more sophisticated HTTP/S Load Balancer, which provides better control over traffic and faster connections to users. Overall, all this helped achieve higher throughput.
So did these strategies help? Yes. If you look at the whole process, experience was the best teacher. Both companies faced issues during the US launch; their initial calculations proved wrong, but they learned from it. So before the Japan launch, the capacity was made generous, the architectural swap to the latest version was done, and the upgrade to the HTTP/S Load Balancer was completed. The game was now ready to launch in Japan, and the results came quickly: user sign-ups tripled compared to the US launch, and yet the game didn't face any issues.
Real-world examples are always inspiring and convincing, because we see the change happening in front of us. Yes, Kubernetes has made a huge difference to these companies. And these are only two examples; Kubernetes has made a difference to numerous companies. My only intention is to draw your attention to Kubernetes. These companies have benefited a lot from it, so why don't you? Learn it, implement it, and experience the change!
Here is the link to begin with Kubernetes: