How Kubernetes is used in industry and its use cases
Objectives:-
In this article, you will learn about Kubernetes and the challenges it solves.
Kubernetes is a container management tool, so let us first understand the container tool Docker.
Docker
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.
In simple words, Docker can launch an isolated operating system environment (a container) in about a second. With full virtualization, by contrast, installing an operating system can take 30–40 minutes.
- Docker adoption is still growing exponentially as more and more companies use it in production. At that scale, it becomes important to use an orchestration platform to scale and manage your containers.
Challenge in Docker
Imagine a situation where you have been using Docker for a little while, and have deployed on a few different servers. Your application starts getting massive traffic, and you need to scale up fast; how will you go from 3 servers to 40 servers that you may require? And how will you decide which container should go where? How would you monitor all these containers and make sure they are restarted if they die? This is where Kubernetes comes in.
Kubernetes:-
- Kubernetes is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
- It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
- Google generates more than 2 billion container deployments a week, all powered by its internal platform, Borg. Borg was the predecessor to Kubernetes, and the lessons learned from developing Borg over the years became the primary influence behind much of Kubernetes technology.
- The primary advantage of using Kubernetes in your environment, especially if you are optimizing app dev for the cloud, is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines (VMs).
How to connect to a K8s cluster
To communicate with a Kubernetes cluster, we need the Kubernetes client, kubectl. With the help of kubectl, we give instructions to the cluster.
There are two ways to interact with a Kubernetes cluster:
- Kubectl command
- YAML code
Imperative kubectl commands do not expose every option of every resource, so declarative YAML manifests are more powerful and cover all Kubernetes resources.
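As a sketch of the two approaches, the same simple pod can be created imperatively with a kubectl command or declaratively from a YAML manifest. The names and image below are illustrative, not from any particular cluster:

```yaml
# Imperative equivalent (one-off command):
#   kubectl run web --image=nginx:1.25 --port=80
# Declarative form, applied with: kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```

The declarative form can be stored in version control and reapplied, which is why YAML is generally preferred for anything beyond quick experiments.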
Kubernetes Architecture
Kubernetes has a decentralized architecture that does not handle tasks sequentially. It functions based on a declarative model and implements the concept of a ‘desired state.’ These steps illustrate the basic Kubernetes process:
- An administrator creates and places the desired state of an application into a manifest file.
- The file is provided to the Kubernetes API Server using a CLI or UI. Kubernetes’ default command-line tool is called kubectl.
- Kubernetes stores the file (an application’s desired state) in a database called the Key-Value Store (etcd).
- Kubernetes then implements the desired state on all the relevant applications within the cluster.
- Kubernetes continuously monitors the elements of the cluster to make sure the current state of the application does not vary from the desired state.
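As a concrete sketch of such a manifest, a file expressing a desired state of three replicas might look like this (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # the desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Applying this file with `kubectl apply -f deployment.yaml` hands the desired state to the API Server, which stores it in etcd; the controllers then continuously reconcile the cluster toward it.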
We will now explore the individual components of a standard Kubernetes cluster to understand the process in greater detail.
A. What is Master Node in Kubernetes Architecture?
- The Kubernetes Master (Master Node) receives input from a CLI (Command-Line Interface) or UI (User Interface) via an API. These are the commands you provide to Kubernetes.
- You define pods, replica sets, and services that you want Kubernetes to maintain. For example, which container image to use, which ports to expose, and how many pod replicas to run.
- You also provide the parameters of the desired state for the application(s) running in that cluster.
API Server
The API Server is the front-end of the control plane and the only component in the control plane that we interact with directly. Internal system components, as well as external user components, all communicate via the same API.
Key-Value Store (etcd)
The Key-Value Store, also called etcd, is the database Kubernetes uses to back up all cluster data. It stores the entire configuration and state of the cluster. The Master node queries etcd to retrieve parameters for the state of the nodes, pods, and containers.
Controller
The role of the Controller is to obtain the desired state from the API Server. It checks the current state of the nodes it is tasked to control, and determines if there are any differences, and resolves them, if any.
Scheduler
A Scheduler watches for new requests coming from the API Server and assigns them to healthy nodes. It ranks the quality of the nodes and deploys pods to the best-suited node. If there are no suitable nodes, the pods are put in a pending state until such a node appears.
B. What is Worker Node in Kubernetes Architecture?
Worker nodes listen to the API Server for new work assignments; they execute the work assignments and then report the results back to the Kubernetes Master node.
Kubelet
The kubelet runs on every node in the cluster. It is the principal Kubernetes agent. By installing kubelet, the node’s CPU, RAM, and storage become part of the broader cluster. It watches for tasks sent from the API Server, executes the task, and reports back to the Master. It also monitors pods and reports back to the control plane if a pod is not fully functional. Based on that information, the Master can then decide how to allocate tasks and resources to reach the desired state.
Container Runtime
The container runtime pulls images from a container image registry and starts and stops containers. Third-party software or a plugin, such as Docker, usually performs this function.
Kube-proxy
The kube-proxy runs on each node and maintains the local network rules (for example, iptables rules) that route traffic to services and load-balance it across their pods.
Pod
- A pod is the smallest element of scheduling in Kubernetes. Without it, a container cannot be part of a cluster. If you need to scale your app, you can only do so by adding or removing pods.
- The pod serves as a ‘wrapper’ for a single container with the application code. Based on the availability of resources, the Master schedules the pod on a specific node and coordinates with the container runtime to launch the container.
- In instances where pods unexpectedly fail to perform their tasks, Kubernetes does not attempt to fix them. Instead, it creates and starts a new pod in its place. This new pod is a replica, except for the DNS and IP address. This feature has had a profound impact on how developers design applications.
- Due to the flexible nature of Kubernetes architecture, applications no longer need to be tied to a particular instance of a pod. Instead, applications need to be designed so that an entirely new pod, created anywhere within the cluster, can seamlessly take its place. To assist with this process, Kubernetes uses services.
Kubernetes Services
- Pods are not constant. One of the best features Kubernetes offers is that non-functioning pods get replaced by new ones automatically.
- However, these new pods have a different set of IPs. It can lead to processing issues, and IP churn as the IPs no longer match. If left unattended, this property would make pods highly unreliable.
- Services are introduced to provide reliable networking by bringing stable IP addresses and DNS names to the unstable world of pods.
- By controlling traffic coming and going to the pod, a Kubernetes service provides a stable networking endpoint – a fixed IP, DNS, and port. Through a service, any pod can be added or removed without the fear that basic network information would change in any way.
How Do Kubernetes Services Work?
- Pods are associated with services through key-value pairs called labels and selectors. A service automatically discovers a new pod with labels that match the selector.
- This process seamlessly adds new pods to the service, and at the same time, removes terminated pods from the cluster.
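A minimal sketch of a Service and the label selector that binds it to pods (the names, labels, and ports are illustrative):

```yaml
# The Service selects pods by label: every pod carrying app: web is
# discovered automatically and receives traffic on the Service's stable
# virtual IP and DNS name, no matter how often individual pods churn.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
    - port: 80          # port clients connect to on the Service
      targetPort: 8080  # port the container actually listens on
```

Inside the cluster, clients can then reach the pods at the stable DNS name `web-svc` instead of tracking individual pod IPs.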
- For example, if the desired state includes three replicas of a pod and a node running one replica fails, the current state is reduced to two pods. Kubernetes observes that the desired state is three pods. It then schedules one new replica to take the place of the failed pod and assigns it to another node in the cluster.
- The same would apply when updating or scaling the application by adding or removing pods. Once we update the desired state, Kubernetes notices the discrepancy and adds or removes pods to match the manifest file. The Kubernetes control plane records, implements, and runs background reconciliation loops that continuously check to see if the environment matches user-defined requirements.
What is Container Deployment?
To fully understand how and what Kubernetes orchestrates, we need to explore the concept of container deployment.
a. Traditional Deployment
Initially, developers deployed applications on individual physical servers. This type of deployment posed several challenges. The sharing of physical resources meant that one application could take up most of the processing power, limiting the performance of other applications on the same machine.
- It takes a long time to expand hardware capacity, which in turn increases costs. To resolve hardware limitations, organizations began virtualizing physical machines.
b. Virtualized Deployment
Virtualized deployment allows you to create isolated virtual environments, called Virtual Machines (VMs), on a single physical server. This solution isolates applications within a VM, limits the use of resources, and increases security. An application can no longer freely access the information processed by another application.
- Virtualized deployments allow you to scale quickly and spread the resources of a single physical server, update at will, and keep hardware costs in check. Each VM has its operating system and can run all necessary systems on top of the virtualized hardware.
c. Container Deployment
Container Deployment is the next step in the drive to create a more flexible and efficient model. Much like VMs, containers have individual memory, system files, and processing space. However, strict isolation is no longer a limiting factor.
- Multiple applications can now share the same underlying operating system. This feature makes containers much more efficient than full-blown VMs. They are portable across clouds, different devices, and almost any OS distribution.
- The container structure also allows for applications to run as smaller, independent parts. These parts can then be deployed and managed dynamically on multiple machines. The elaborate structure and the segmentation of tasks are too complex to manage manually.
- An automation solution, such as Kubernetes, is required to effectively manage all the moving parts involved in this process.
Why do you need Kubernetes?
- Kubernetes can help you deliver and manage containerized, legacy, and cloud-native apps, as well as those being refactored into microservices.
- In order to meet changing business needs, your development team needs to be able to rapidly build new applications and services. Cloud-native development starts with microservices in containers, which enables faster development and makes it easier to transform and optimize existing applications.
- Production apps span multiple containers, and those containers must be deployed across multiple server hosts. Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads.
- Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time. With Kubernetes you can take effective steps toward better IT security.
- Kubernetes also needs to integrate with networking, storage, security, telemetry, and other services to provide a comprehensive container infrastructure.
- Linux containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.
- This significantly multiplies the number of containers in your environment, and as those containers accumulate, the complexity also grows.
- Kubernetes fixes a lot of common problems with container proliferation by sorting containers together into “pods.” Pods add a layer of abstraction to grouped containers, which helps you schedule workloads and provide necessary services — like networking and storage — to those containers.
- With the right implementation of Kubernetes — and with the help of other open source projects like Open vSwitch, OAuth, and SELinux — you can orchestrate all parts of your container infrastructure.
- Other parts of Kubernetes help you balance loads across these pods and ensure you have the right number of containers running to support your workloads.
Uses of K8s in Industry
1. Service discovery and load balancing
Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
2. Storage orchestration
Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
3. Automated rollouts and rollbacks
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
4. Programmed receptacle pressing
You give Kubernetes a group of hubs that it can use to run containerized assignments. You disclose to Kubernetes the amount of and memory (RAM) every compartment needs. Kubernetes can fit compartments onto your hubs to utilize your assets.
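The CPU and memory figures the scheduler uses for bin packing are declared per container as resource requests and limits. A minimal sketch, with hypothetical values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:         # the scheduler bin-packs pods onto nodes using these
          cpu: "250m"     # a quarter of one CPU core
          memory: "128Mi"
        limits:           # hard ceilings enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```

A pod whose requests cannot be satisfied by any node stays in the pending state, matching the scheduler behavior described earlier.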
5. Secret and configuration management
- Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
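As an illustration, a Secret can be defined once and injected into a pod as an environment variable, keeping the credential out of the image. All names and values here are made up:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:             # plain values; the API server stores them base64-encoded
  username: admin
  password: s3cr3t
---
# Consuming the secret in a pod without baking it into the container image:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```

Rotating the password then only requires updating the Secret object, not rebuilding or redeploying the image.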
6. Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
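The user-defined health checks behind this behavior are expressed as probes on the container. A minimal sketch, assuming a hypothetical web server that answers on port 80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      livenessProbe:          # a container failing this check is restarted
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
      readinessProbe:         # a pod failing this check receives no traffic
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
```

The liveness probe drives the restart behavior, while the readiness probe keeps a pod out of its Service’s endpoints until it is actually ready to serve.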
Conclusion
Kubernetes operates using a very simple model. We input how we would like our system to function – Kubernetes compares the desired state to the current state within a cluster. Its service then works to align the two states and achieve and maintain the desired state.