KUBERNETES
Kubernetes is a system for deploying containerized applications and utilizing the infrastructure that powers them more efficiently. Kubernetes can save organizations money because it takes less manpower to manage IT, and it makes apps more resilient and performant.
You can run Kubernetes on-premises or in the public cloud. AWS, Azure, and GCP all offer managed Kubernetes services that help customers get started quickly and operate K8s apps efficiently. Kubernetes also makes apps far more portable, so IT can move them more easily between different clouds and internal environments.
It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools and runs containers in a cluster, often with images built using Docker. Kubernetes originally interfaced with the Docker runtime through a "Dockershim"; however, the shim has since been deprecated in favor of directly interfacing with containerd or another CRI-compliant runtime.
History
Kubernetes was founded by Joe Beda, Brendan Burns, and Craig McLuckie, who were quickly joined by other Google engineers including Brian Grant and Tim Hockin, and was first announced by Google in mid-2014. Its development and design are heavily influenced by Google's Borg system, and many of the top contributors to the project previously worked on Borg. The original codename for Kubernetes within Google was Project 7, a reference to the Star Trek ex-Borg character Seven of Nine. The seven spokes on the wheel of the Kubernetes logo are a reference to that codename. The original Borg project was written entirely in C++, but the rewritten Kubernetes system is implemented in Go.
Kubernetes v1.0 was released on July 21, 2015. Along with the Kubernetes v1.0 release, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) and offered Kubernetes as a seed technology. In February 2016, the Helm package manager for Kubernetes was released. On March 6, 2018, the Kubernetes project reached ninth place in commits on GitHub, and second place in authors and issues, after the Linux kernel.
Is Kubernetes getting adopted in enterprises?
Yes, Kubernetes is being adopted in enterprises at a rapid pace.
Several data points show rapid Kubernetes adoption. Sumo Logic's fourth annual Continuous Intelligence Report, "The State of Modern Applications and DevSecOps in the Cloud," highlights some compelling adoption data on Kubernetes within enterprises. The report states that K8s is seeing increased adoption in on-premises as well as cloud-based environments. In fact, one in three enterprises in the AWS cloud today uses Kubernetes as their key orchestration solution.
CAPABILITIES OF KUBERNETES
Here are five fundamental business capabilities that Kubernetes can drive in the enterprise, be it large or small. To add teeth to these use cases, we have identified some real-world examples that validate the value enterprises are getting from their Kubernetes deployments:
- Faster time to market
- IT cost optimization
- Improved scalability and availability
- Multi-cloud (and hybrid cloud) flexibility
- Effective migration to the cloud
Let's look at each of these values in greater detail.
1. Faster time to market (aka improved app development/deployment efficiencies)
Kubernetes enables a “microservices” approach to building apps. Now you can break up your development team into smaller teams that focus on a single, smaller microservice. These teams are smaller and more agile because each team has a focused function. APIs between these microservices minimize the amount of cross-team communication required to build and deploy. So, ultimately, you can scale multiple small teams of specialized experts who each help support a fleet of thousands of machines.
Kubernetes also allows your IT teams to manage large applications across many containers more efficiently by handling many of the nitty-gritty details of maintaining container-based apps. For example, Kubernetes handles service discovery, helps containers talk to each other, and arranges access to storage from various providers such as AWS and Microsoft Azure.
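For example, service discovery, one of those nitty-gritty details, is handled through Service objects: containers reach each other through a stable DNS name instead of hard-coded IP addresses. Here is a minimal sketch, with hypothetical names:

```yaml
# Hypothetical example: expose an "orders" microservice inside the cluster.
# Other pods can reach it at http://orders via cluster DNS, no matter which
# nodes its pods land on or how often they are rescheduled.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders        # routes traffic to any pod carrying this label
  ports:
  - port: 80           # port clients connect to
    targetPort: 8080   # port the container actually listens on
```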
2. IT cost optimization
Kubernetes can help your business cut infrastructure costs quite drastically if you're operating at massive scale. Kubernetes makes a container-based architecture feasible by packing apps together optimally on your cloud and hardware investments. Before Kubernetes, administrators often over-provisioned their infrastructure to conservatively handle unexpected spikes, or simply because it was difficult and time-consuming to manually scale containerized applications. Kubernetes intelligently schedules and tightly packs containers, taking into account the available resources. It also automatically scales your application to meet business needs, thus freeing up human resources to focus on other productive tasks.
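The tight packing mentioned above is driven by resource requests: the scheduler places each container onto a node with enough spare capacity instead of reserving a whole over-provisioned machine for it. A minimal sketch, with hypothetical names and sizes:

```yaml
# Hypothetical example: declared requests let the Kubernetes scheduler
# bin-pack pods onto nodes; limits cap what each container may burst to.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0     # placeholder image
        resources:
          requests:                # what the scheduler packs against
            cpu: 250m
            memory: 256Mi
          limits:                  # hard ceiling per container
            cpu: 500m
            memory: 512Mi
```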
There are many examples of customers, such as Spotify (see the case studies below), who have seen dramatic improvements in cost optimization using K8s.
3. Improved scalability and availability
The success of today's applications depends not only on features, but also on the scalability of the application. After all, if an application cannot scale well, it will be slow at best and completely unavailable at worst.
As an orchestration system, Kubernetes is a critical management layer that can "auto-magically" scale an app and improve its performance. Suppose we have a CPU-intensive service whose user load changes with business conditions (for example, an event ticketing app that sees a dramatic spike in users and load just before an event and low usage at other times). What we need here is a solution that can scale up the app and its infrastructure so that new machines are automatically spun up as the load increases (more users are buying tickets) and scaled down when the load subsides. Kubernetes offers exactly that capability, scaling up the application when CPU usage goes above a defined threshold (for example, 90 percent on the current machines). And when the load reduces, Kubernetes can scale the application back down, thus optimizing infrastructure utilization. Kubernetes autoscaling is not limited to infrastructure metrics; any type of metric, from resource utilization metrics to custom metrics, can be used to trigger the scaling process.
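The ticketing scenario above maps directly onto a HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named ticketing-app (the name and replica bounds are hypothetical) and a metrics server running in the cluster:

```yaml
# Hypothetical example: scale the ticketing app on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ticketing-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ticketing-app
  minReplicas: 2             # small baseline between events
  maxReplicas: 20            # cap for the pre-event spike
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 90   # the 90 percent threshold from the text
```

Swapping the Resource metric for a Pods or External metric is how the custom-metric scaling mentioned above is wired up.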
4. Multi-cloud flexibility
One of the biggest benefits of Kubernetes and containers is that they help you realize the promise of hybrid and multi-cloud. Enterprises today are already running multi-cloud environments and will continue to do so in the future. Kubernetes makes it much easier to run any app on any public cloud service or any combination of public and private clouds. This allows you to put the right workloads on the right cloud and helps you avoid vendor lock-in. Getting the best fit, using the right features, and having the leverage to migrate when it makes sense all help you realize more ROI (short and longer term) from your IT investments.
Need more data to validate the multi-cloud and Kubernetes match made in heaven? This finding from the Sumo Logic Continuous Intelligence Report identifies a very interesting upward trend in K8s adoption based on the number of cloud platforms organizations use, with 86 percent of customers on all three using managed or native Kubernetes solutions. Should AWS be worried? Probably not. But it may be an early sign of a level playing field for Azure and GCP, because apps deployed on K8s can be easily ported across environments (on-premises to cloud or across clouds).
5. Seamless migration to cloud
Whether you are rehosting (lift and shift of the app), replatforming (make some basic changes to the way it runs), or refactoring (the entire app and the services that support it are modified to better suit the new compartmentalized environment), Kubernetes has you covered.
Since K8s runs consistently across environments, on-premises and in clouds like AWS, Azure, and GCP, it provides a more seamless and prescriptive path for porting your application from on-premises to cloud environments, as the sketch below illustrates.
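That prescriptive path exists largely because the same declarative manifest can be applied unchanged to any conformant cluster. A minimal sketch, with hypothetical names and cluster contexts:

```yaml
# Hypothetical example: one manifest, many environments. The same file can
# target an on-premises cluster or a managed cloud cluster, for example:
#   kubectl --context=onprem-cluster apply -f app.yaml
#   kubectl --context=cloud-cluster  apply -f app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: example/app:1.0   # placeholder image
```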
CASE STUDIES
1. Spotify
Challenge
Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. "Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today—and hopefully the consumers we'll have in the future," says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that "having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community," he says.
Solution
"We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that," says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, "we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools." At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because "Kubernetes fit very nicely as a complement and now as a replacement to Helios," says Chakrabarti.
Impact
The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. "A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we've heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify," says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, "Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes." In addition, with Kubernetes's bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
2. Nokia
Challenge
Nokia's core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. "As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators have a bit different infrastructure," says Gergely Csatari, Senior Open Source Engineer. "There are operators who are running on bare metal. There are operators who are running on virtual machines. There are operators who are running on VMware Cloud and OpenStack Cloud. We want to run the same product on all of these different infrastructures without changing the product itself."
Solution
The company decided that moving to cloud native technologies would allow teams to have infrastructure-agnostic behavior in their products. Teams at Nokia began experimenting with Kubernetes in pre-1.0 versions. "The simplicity of the label-based scheduling of Kubernetes was a sign that showed us this architecture will scale, will be stable, and will be good for our purposes," says Csatari. The first Kubernetes-based product, the Nokia Telephony Application Server, went live in early 2018. "Now, all the products are doing some kind of re-architecture work, and they're moving to Kubernetes."
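Label-based scheduling, the property Csatari calls out, lets a workload declare what kind of node it needs rather than naming the machine it must run on. A generic sketch, not Nokia's actual configuration, with hypothetical labels:

```yaml
# Hypothetical example: steer a pod onto nodes carrying a matching label.
# A node would first be labeled with, for example:
#   kubectl label nodes worker-7 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd              # run only on nodes labeled disktype=ssd
  containers:
  - name: app
    image: example/app:1.0     # placeholder image
```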
Impact
Kubernetes has enabled Nokia's foray into 5G. "When you develop something that is part of the operator's infrastructure, you have to develop it for the future, and Kubernetes and containers are the forward-looking technologies," says Csatari. The teams using Kubernetes are already seeing clear benefits. "By separating the infrastructure and the application layer, we have less dependencies in the system, which means that it's easier to implement features in the application layer," says Csatari. And because teams can test the exact same binary artifact independently of the target execution environment, "we find more errors in early phases of the testing, and we do not need to run the same tests on different target environments, like VMware, OpenStack, or bare metal," he adds. As a result, "we save several hundred hours in every release."
3. IBM
Challenge
IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes, from its OpenWhisk-based function as a service (FaaS) offering and managed Kubernetes and containers, to Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company's enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBM's Weather Company API and data services. In the latter part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.
Solution
The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM's trust story, since it makes it possible for users to consume the company's Notary offering from within their IKS clusters. In this offering, the Notary server runs in IBM's cloud, while Portieris runs inside the IKS cluster; this lets an IKS cluster verify that the images it loads containers from contain exactly what users expect them to.
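As an admission controller, Portieris checks each workload-creating request against policies along the lines of the sketch below. This is illustrative only, based on the Portieris open source project rather than IBM's production configuration, and the registry path is hypothetical:

```yaml
# Hypothetical example: require Notary-signed images for one registry
# namespace; pods whose images fail trust verification are rejected
# at admission time.
apiVersion: portieris.cloud.ibm.com/v1
kind: ImagePolicy
metadata:
  name: signed-images-only
spec:
  repositories:
  - name: "icr.io/my-namespace/*"   # hypothetical registry namespace
    policy:
      trust:
        enabled: true               # enforce Notary content trust
```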
Impact
IBM's intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. "Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem," Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. "We had a multi-tenant Docker Registry with private image hosting," Hough says. "The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose."
4. Huawei
Challenge
A multinational company that's the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, Huawei has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It's very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company's Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."
Solution
After deciding to use container technology, Huawei began moving the internal I.T. department's applications to run on Kubernetes. So far, about 30 percent of these applications have been transferred to cloud native.
Impact
"By the end of 2016, Huawei's internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou. "The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold." For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which we think is very helpful for our business." Given the results Huawei has had internally – and the demand it is seeing externally – the company has also built the technologies into FusionStage™, the PaaS solution it offers its customers.
5. OpenAI
Challenge
An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.
Solution
OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. "We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster," says Christopher Berner, Head of Infrastructure. "This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."
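Using Kubernetes as a batch scheduling system typically means submitting experiments as Job objects and letting an autoscaler grow or shrink the node pool behind them. A generic sketch, not OpenAI's actual setup, with hypothetical names and sizes:

```yaml
# Hypothetical example: a one-off training run submitted as a batch Job.
# An autoscaler can add a node to fit it, then reclaim the node when the
# Job completes, so idle capacity is not paid for.
apiVersion: batch/v1
kind: Job
metadata:
  name: experiment-42
spec:
  backoffLimit: 2                  # retry a failed run up to twice
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: example/trainer:1.0 # placeholder image
        resources:
          requests:
            cpu: "8"
            memory: 32Gi
```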
Impact
The company has benefited from greater portability: "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters," says Berner. Being able to use its own data centers when appropriate is "lowering costs and providing us access to hardware that we wouldn't necessarily have access to in the cloud," he adds. "As long as the utilization is high, the costs are much lower there." Launching experiments also takes far less time: "One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
6. Pinterest
Challenge
After eight years in existence, Pinterest had grown to 1,000 microservices, with multiple layers of infrastructure and a diverse set of tools and platforms. In 2016 the company launched a roadmap toward a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
Solution
The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.
Impact
"By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. "We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."
THANK YOU FOR READING!