Source: Cloud Native Computing Foundation

Kubernetes and its Use-Cases

Dipaditya Das
9 min read · Mar 12, 2021

We all know how important containers have become in today’s fast-moving IT world. Pretty much every big organization has moved away from its traditional approach of using virtual machines and started deploying with containers, and these organizations are now looking for trained Kubernetes professionals with in-depth knowledge of containerization and orchestration tools. So it’s high time you understood what Kubernetes is.

🚀 The following topics are covered in this blog:

  1. What is Kubernetes?
  2. Kubernetes Components & Architecture
  3. Rise of Kubernetes
  4. Why use Kubernetes?
  5. Features of Kubernetes
  6. Top 3 Industry Use-Case Studies

What is Kubernetes?

Kubernetes is an open-source orchestration tool developed by Google for managing microservices or containerized applications across a distributed cluster of nodes. Kubernetes provides a highly resilient infrastructure with zero-downtime deployment capabilities, automatic rollback, scaling, and self-healing of containers (auto-placement, auto-restart, auto-replication, and autoscaling based on CPU usage).

Source: RedHat

The main objective of Kubernetes is to hide the complexity of managing a fleet of containers by providing REST APIs for the required functionalities. Kubernetes is portable: it can run on public or private cloud platforms such as AWS, Azure, and OpenStack, on top of cluster managers such as Apache Mesos, or on bare-metal machines.
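To make that concrete, here is a minimal sketch of talking to those REST APIs with the official Kubernetes Python client (installed via pip install kubernetes). The kubeconfig and the cluster it points to are assumptions; the snippet simply lists every pod the API server knows about.

```python
# Minimal sketch using the official Kubernetes Python client.
# Assumes a valid kubeconfig is available on the machine.
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config (use load_incluster_config() inside a pod)
v1 = client.CoreV1Api()     # client for the core/v1 REST API group

# Every call below is just an authenticated HTTPS request to the kube-apiserver.
pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.status.phase}")
```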

Kubernetes Components and Architecture

Kubernetes follows a client-server architecture. A multi-master setup is possible (for high availability), but by default there is a single master server that acts as the controlling node and point of contact. The master server consists of several components: the kube-apiserver, an etcd key-value store, the kube-controller-manager, the cloud-controller-manager, the kube-scheduler, and a DNS server for Kubernetes services. Worker-node components include the kubelet and kube-proxy, running on top of a container runtime such as Docker.
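As an illustration of that client-server split, the hedged sketch below (again with the Python client and the same assumed kubeconfig) asks the kube-apiserver for the list of nodes and prints each node's kubelet version: all cluster state is read through the API server, never from the nodes directly.

```python
# Sketch: inspect the cluster topology through the kube-apiserver.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    # The control-plane/master role is conventionally exposed as a node label.
    role = "master" if any(k.startswith("node-role.kubernetes.io/") and
                           ("master" in k or "control-plane" in k) for k in labels) else "worker"
    print(f"{node.metadata.name}: role={role}, kubelet={node.status.node_info.kubelet_version}")
```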

How do containers ensure high availability, disaster recovery, or scalability?

Container orchestration systems such as Kubernetes (abbreviated K8s) offer a solution. These systems manage one or more clusters of machines and track the health and availability of each container running on them. Cluster sizes can range from three machines to many thousands of machines and containers, distributed among different cloud providers if needed. If a machine breaks down, the orchestrator shifts its containers to another node while keeping the entire cluster operational.
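That self-healing behaviour comes from declaring a desired state and letting the control plane converge on it. Below is a hedged sketch, again with the Python client, that declares a Deployment with three replicas; the name "web" and the nginx image are illustrative, not from the article. If a node running one of these pods fails, the controller manager and scheduler recreate the pod on another healthy node.

```python
# Sketch: declare a desired state of 3 replicas; Kubernetes keeps it true.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes reschedules pods onto healthy nodes to keep 3 running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.21",
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```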

The Rise of Kubernetes

First released in 2014, Kubernetes is an open-source container orchestration tool that can automatically scale and distribute containers and manage their fault tolerance. Originally created by Google and later donated to the Cloud Native Computing Foundation, Kubernetes is widely used in production environments to handle Docker containers and other container runtimes in a fault-tolerant manner. As an open-source product, it is available on various platforms and systems. Google Cloud (GKE), Microsoft Azure (AKS), and Amazon AWS (EKS) offer managed Kubernetes services, so you do not have to configure and maintain the cluster itself.

The popularity of Kubernetes has steadily increased, with more than four major releases in 2017. K8s was also the most-discussed project on GitHub during 2017, and the project with the second-most reviews.

Why use Kubernetes?

Here are the main benefits of using Kubernetes:

  • Kubernetes can run on-premises on bare metal, on OpenStack, and on public clouds such as Google Cloud, Azure, and AWS.
  • It helps you avoid vendor lock-in, because workloads do not depend on vendor-specific APIs or services except where Kubernetes provides an abstraction for them (e.g., load balancers and storage).
  • Containerization with Kubernetes lets you package software to serve these goals, and enables applications to be released and updated without any downtime (see the rolling-update sketch below).
  • Kubernetes lets you ensure that containerized applications run where and when you want, and helps them find the resources and tools they need.

Source: Cloud Native Computing Foundation (CNCF)
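As a concrete illustration of the zero-downtime point above, the sketch below patches the image of the hypothetical "web" Deployment from the earlier example; Kubernetes then performs a rolling update, replacing pods gradually so that some replicas keep serving traffic throughout.

```python
# Sketch: trigger a zero-downtime rolling update by changing the pod template.
# Assumes the illustrative "web" Deployment created in the earlier example.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "image": "nginx:1.25"}   # new image version (illustrative)
]}}}}
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
# The rollout can be reverted just as declaratively, e.g. by patching back
# to the previous image, which is what automatic rollback builds on.
```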

Features of Kubernetes

Here are the essential features of Kubernetes:

  • Automated Scheduling
  • Self-Healing Capabilities
  • Automated rollouts & rollback
  • Horizontal Scaling & Load Balancing (see the autoscaling sketch after this list)
  • Offers environment consistency for development, testing, and production
  • Infrastructure is loosely coupled, so each component can act as a separate unit
  • Provides a higher density of resource utilization
  • Offers enterprise-ready features
  • Application-centric management
  • Auto-scalable infrastructure
  • You can create predictable infrastructure
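To ground the horizontal scaling feature, here is a hedged sketch that attaches a HorizontalPodAutoscaler to the hypothetical "web" Deployment from the earlier examples; the name, replica bounds, and CPU threshold are assumptions, but scaling replicas on CPU usage is exactly what the feature list refers to.

```python
# Sketch: CPU-based horizontal autoscaling for the illustrative "web" Deployment.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=3,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add replicas when average CPU exceeds 70%
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```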

🎯 INDUSTRY USE-CASE STUDIES 🎯

✅ CASE STUDY: U.S. Department of Defense

With Kubernetes, the U.S. Department of Defense is enabling DevSecOps on F-16s and battleships

Challenge 👨‍💻

In the recent past, software delivery within the U.S. Department of Defense could take anywhere from three to ten years for big weapons systems. “It was mostly teams using waterfall, no minimum viable product, no incremental delivery, and no feedback loop from end-users,” says Nicolas M. Chaillan, Chief Software Officer of the U.S. Air Force. “Particularly when it comes to AI, machine learning, and cybersecurity, everyone realized we have to move faster.”

Solution 💡

Chaillan and Peter Ranks, Deputy Chief Information Officer for Information Enterprise, DoD CIO, created the DoD Enterprise DevSecOps reference design, with a mandate to use CNCF-compliant Kubernetes clusters and other open source technologies across the DoD.

Impact ⚡

Releases, which once took as long as 3 to 8 months, now can be achieved in one week. An authority to operate (ATO) for a cloud enclave can be obtained within one week, plus “we have a continuous ATO on the platform stack,” says Chaillan. “Anytime it’s going to pass the gates, the software is automatically accredited. So you can push software multiple times a day.” All told, “we’re thinking with the 37 programs, it’s going to be a 100+ year saved off planned program time,” he adds.

“The DoD Enterprise DevSecOps reference design defines the gates on the DevSecOps pipeline. As long as teams are compliant with that reference design, they can get a DoD-wide continuous ATO (authority to operate).”

— NICOLAS M. CHAILLAN, CHIEF SOFTWARE OFFICER, U.S. AIR FORCE

✅ CASE STUDY: BOSE

Bose: Supporting Rapid Development for Millions of IoT Products With Kubernetes

Challenge 👨‍💻

A household name in high-quality audio equipment, Bose has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. “We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast,” says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: “To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale,” says Cloud Architecture Manager Dylan O’Mahony.

Solution 💡

From the beginning, the team knew it wanted a microservices architecture. After evaluating and prototyping a couple of orchestration solutions, the team decided to adopt Kubernetes for its scaled IoT Platform-as-a-Service running on AWS. The platform, which also incorporated Prometheus monitoring, launched in production in 2017, serving over 3 million connected products from the get-go. Bose has since adopted a number of other CNCF technologies, including Fluentd, CoreDNS, Jaeger, and OpenTracing.

Impact ⚡

With about 100 engineers onboarded, the platform is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. Just one production cluster holds 1,800 namespaces and 340 worker nodes. “We had a brand new service taken from concept through coding and deployment all the way to production, including hardening, security testing, and so forth, in less than two and a half weeks,” says O’Mahony.

“At Bose we’re building an IoT platform that has enabled our physical products. If it weren’t for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule.”

— JOSH WEST, LEAD CLOUD ENGINEER, BOSE

✅ CASE STUDY: CERN

CERN: Processing Petabytes of Data More Efficiently with Kubernetes

Challenge 👨‍💻

At CERN, the European Organization for Nuclear Research, physicists conduct experiments to learn about fundamental science. In its particle accelerators, “we accelerate protons to very high energy, close to the speed of light, and we make the two beams of protons collide,” says CERN Software Engineer Ricardo Rocha. “The end result is a lot of data that we have to process.” CERN currently stores 330 petabytes of data in its data centers, and an upgrade of its accelerators expected in the next few years will drive that number up by 10x. Additionally, the organization experiences extreme peaks in its workloads during periods prior to big conferences and needs its infrastructure to scale to those peaks. “We want to have a more hybrid infrastructure, where we have our on-premise infrastructure but can make use of public clouds temporarily when these peaks come up,” says Rocha. “We’ve been looking to new technologies that can help improve our efficiency in our infrastructure so that we can dedicate more of our resources to the actual processing of the data.”

Solution 💡

CERN’s technology team embraced containerization and cloud-native practices, choosing Kubernetes for orchestration, Helm for deployment, Prometheus for monitoring, and CoreDNS for DNS resolution inside the clusters. Kubernetes federation has allowed the organization to run some production workloads both on-premise and in public clouds.

Impact ⚡

“Kubernetes gives us the full automation of the application,” says Rocha. “It comes with built-in monitoring and logging for all the applications and the workloads that deploy in Kubernetes. This is a massive simplification of our current deployments.” The time to deploy a new cluster for a complex distributed storage system has gone from more than 3 hours to less than 15 minutes. Adding new nodes to a cluster used to take more than an hour; now it takes less than 2 minutes. The time it takes to autoscale replicas for system components has decreased from more than an hour to less than 2 minutes. Initially, virtualization gave 20% overhead, but with tuning, this was reduced to ~5%. Moving to Kubernetes on bare metal would get this to 0%. Not having to host virtual machines is expected to also get 10% of memory capacity back.

“Kubernetes is something we can relate to very much because it’s naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure.”

— RICARDO ROCHA, SOFTWARE ENGINEER, CERN

🚀 Summary 🚀

Kubernetes is an orchestration tool for managing distributed services or containerized applications across a distributed cluster of nodes. It was designed to natively support (auto-)scaling, high availability, security, and portability. Kubernetes itself follows a client-server architecture, with a master node composed of an etcd cluster, the kube-apiserver, the kube-controller-manager, the cloud-controller-manager, and the kube-scheduler. Client (worker) nodes run the kubelet and kube-proxy components. Core concepts in Kubernetes include pods (a group of containers deployed and scheduled together), services (a logical group of pods exposed behind a stable IP address), and deployments (a definition of the desired state for a pod or replica set, which a controller acts on whenever the current state differs from the desired state), among others.
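To tie those core concepts together, the hedged sketch below exposes the hypothetical "web" Deployment used in the earlier examples through a Service: the selector groups the pods by label, and the Service gives them a single stable virtual IP and port inside the cluster.

```python
# Sketch: a Service that load-balances across the pods labelled app=web.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},                        # matches the Deployment's pod labels
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="ClusterIP",                               # stable in-cluster virtual IP
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```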

✨ Hope It Will Be Beneficial For You…

🙏 Thanks to Vimal Daga Sir for giving me this opportunity to research real industry use cases of Kubernetes.


Dipaditya Das

IN ● MLOps Engineer ● Linux Administrator ● DevOps and Cloud Architect ● Kubernetes Administrator ● AWS Community Builder ● Google Cloud Facilitator ● Author