Linkerd - The modern-day Service Mesh (Part 1/5)
Thanks to Naveen Verma sir for motivating me to study service meshes. He delivered an amazing session at the Azure Developer Community, where I had the chance to learn a lot about the subject.
A service mesh makes applications more observable, reliable, and secure. These features are provided at the platform layer rather than the application layer. A service mesh is typically implemented as a set of sidecar network proxies deployed alongside the application's containers; the proxies handle service-to-service traffic transparently, without requiring changes to the application code.
Beyond encrypting communication, establishing zero-trust networks, and verifying identity, organizations can use a service mesh to respond swiftly to customer needs and competitive threats. Reliable application performance matters to every firm.
Let's start from the Stone Age. I'm not talking about mainframes, but let's talk monoliths.
(Most of my mentors came from Monolith Era😂😂😂😂)
Before microservices, the world had to deal with complex monoliths.
A monolithic architecture describes a software program built as a single, self-contained unit, independent of other applications. The design isn't far from what the name "monolith" evokes: something enormous and slow-moving.
Monoliths are simpler to deploy because only a single JAR or WAR file has to be uploaded, and they are comparatively easier to create than a microservices architecture. Network latency and inter-service security are also less of a concern than in a microservices design.
However, as monoliths grow in size and accumulate dependencies, building and deploying them becomes slower and more difficult. Microservices, by contrast, scale more easily and use resources more efficiently; because the modules are decoupled, they are simpler to build and deploy independently.
Thus, microservices arrived with improved scalability: because they are granular, each component can be scaled individually. Compared to monoliths, where the entire program must be scaled even when only one part needs it, the process is more time- and cost-efficient.
What are Microservices?
Microservices are an architectural and organizational approach in which software is composed of small, autonomous services that communicate over well-defined APIs. Each of these services is owned by a small, self-contained team.
This is how DevOps engineers need to architect:
Databases (DBs) are now deployed in a separate location from the microservices themselves.
Advantages of Microservices
In Summary
Cons of Microservices Architecture
What advantages do service meshes bring us?
A service mesh works by injecting into your application pods a sidecar that proxies all of the application's network traffic. For each request, the sidecar collects data such as latency, request rate, success rate, status code, and the source and destination services. To provide end-to-end encryption inside your cluster and prevent network snooping, the sidecars also establish encrypted TLS connections with one another. Finally, the sidecars add intelligent routing between your services: circuit breakers, retries, timeouts, load balancing, canary deployments, and blue-green deployments.
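As a concrete illustration, Linkerd expresses per-route retries and timeouts through a ServiceProfile resource. A minimal sketch, assuming a hypothetical `books` service in the `default` namespace:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # The name must be the FQDN of the service this profile applies to
  name: books.default.svc.cluster.local
  namespace: default
spec:
  routes:
    - name: GET /books
      condition:
        method: GET
        pathRegex: /books
      isRetryable: true   # the proxy may retry failed requests on this route
      timeout: 300ms      # fail requests on this route that exceed this budget
```

Once applied, the sidecars enforce these policies for every request matching the route, with no application code changes.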
Our need for a service mesh was driven principally by better monitoring. Mutual TLS and intelligent routing are good to have, but they were not our main concerns. We also wanted a service mesh that was easy to use and required little maintenance.
We dug deep into the available resources and found that Linkerd was a great tool; it changed the way I thought about service meshes.
Linkerd provides a command-line program to set it up and install it. In no time, I had set up a test cluster with Linkerd, installed the sample "books" app, instrumented route monitoring, linked the metrics to our Prometheus server, and used their dashboard application to gain visibility into the performance of the applications. The material on the Linkerd website was fairly simple to understand, and I had little trouble setting up a minimally functional version.
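A minimal version of that walkthrough looks roughly like this. The commands follow the Linkerd 2.x getting-started flow; exact subcommands and URLs vary by release, so treat this as a sketch:

```shell
# Install the Linkerd CLI
curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin

# Validate the cluster, then install the control plane
linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check

# Deploy the demo "books" app with the sidecar proxy injected
kubectl create ns booksapp
curl -sL https://run.linkerd.io/booksapp.yml \
  | linkerd inject - \
  | kubectl -n booksapp apply -f -
```

After a few minutes of traffic, per-route metrics for the books app show up in the dashboard and in Prometheus.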
Once the demo version of Linkerd had been deployed successfully, we put it on our test Kubernetes cluster and gradually added proxies to each namespace.
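Rolling the proxy out one namespace at a time can be done with Linkerd's injection annotation. A sketch, assuming a hypothetical namespace `my-team`:

```shell
# Opt the namespace in to automatic sidecar injection
kubectl annotate namespace my-team linkerd.io/inject=enabled

# Restart workloads so the proxy-injector webhook adds the sidecar to new pods
kubectl -n my-team rollout restart deployment
```

Because injection only happens at pod creation, existing pods keep running unmeshed until they are restarted, which makes the migration naturally gradual.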
So, what is Linkerd?
Linkerd is an open-source service mesh created to be deployed onto several container schedulers and frameworks, including Kubernetes. It is regarded as one of the lightest and fastest service meshes. Without a single line of code change, it offers observability, reliability, and vital security for Kubernetes applications. It integrates effortlessly with Kubernetes and comfortably handles thousands of requests per second. Many firms, including PayPal, Expedia, HashiCorp, Fiserv, and numerous others, use Linkerd in production.
Why is Linkerd used?
Linkerd was developed to address issues with administering and operating large applications and systems. How services interact with one another is critical to an application's runtime behaviour, and Linkerd gives developers a layer of abstraction to regulate that communication, providing improved visibility and reliability. Without a dedicated layer of control, it can be very difficult to assess and diagnose application issues, which is where Linkerd helps.
The requirement for better monitoring
Our business recently pushed for improved analytics for our microservices. We specifically wanted to know:
Our existing monitoring gave us some coverage: our ingress controller's response-code and latency data, plus more in-depth metrics from certain apps that had been instrumented with Prometheus. But these often didn't provide enough detail for speedy problem diagnosis and failure-point identification, so we opted to add a service mesh to our Kubernetes clusters to improve our monitoring.
What advantages does Linkerd offer?
Refer to Linkerd's design goals in its documentation.
What distinguishes Linkerd from Istio and other service meshes?
Istio is heavier and more complicated than Linkerd. Linkerd was built from the ground up with security in mind: mutual TLS is on by default, its data plane is written in Rust (a memory-safe language), and it undergoes regular security audits. Finally, it is hosted by the CNCF and has a stated commitment to open source.
This is how Linkerd works across the control plane and the data plane.
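Assuming the control plane is installed in the default `linkerd` namespace, you can inspect both planes directly (component names vary by Linkerd version):

```shell
# Control plane: the deployments that manage the mesh
kubectl -n linkerd get deployments

# Data plane: every meshed pod carries a linkerd-proxy sidecar container
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}' \
  | grep linkerd-proxy
```

The control plane issues certificates and configuration; the data-plane proxies sit next to each application container and carry the actual traffic.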
Installation of Linkerd on a development cluster.
Linkerd demo installations may be converted to semi-production installations by making simple modifications to the default installation.
Linkerd's Helm installation and its high-availability values.yaml file take care of the first two bullet points. As of Linkerd 2.7.0, you can create your root authority and issuer certificates independently, and cert-manager handles certificate rotation, which takes care of bullet point three.
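A sketch of that flow using the `step` CLI and Helm. Chart value names and flags differ between Linkerd releases (older charts nest these under `global.`), so check the docs for your version:

```shell
# Create a root trust anchor and an intermediate issuer certificate
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key

# Install the control plane with the high-availability values file
helm repo add linkerd https://helm.linkerd.io/stable
helm install linkerd2 linkerd/linkerd2 \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  -f values-ha.yaml
```

With the issuer created this way, cert-manager can then be configured to rotate the issuer certificate on a schedule.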
Managing the Helm chart with ArgoCD or another GitOps solution is simple, but the app frequently appears out of sync with its current manifests in Kubernetes. Because of the issuer-certificate rotation carried out by cert-manager, and the built-in rotation that renews each control-plane component's TLS certificate daily, your initial Helm installation will never match what is currently in use. Even worse, syncing the app will invalidate those certificates and may require restarting specific control-plane components to re-enable sidecar injection. We should keep this in mind when we need to upgrade Linkerd in the future.
You can forward the linkerd-web deployment's port to your local machine to view the dashboard Linkerd offers. The dashboard shows connection information such as success rate, latency, and request rate, along with a dependency graph of your services and call-routing functionality.
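For example (the deployment was named `linkerd-web` in the Linkerd 2.x releases this article describes; later releases moved the dashboard into the `linkerd-viz` extension):

```shell
# Forward the dashboard to localhost:8084
kubectl -n linkerd port-forward deploy/linkerd-web 8084:8084
# then browse to http://localhost:8084
```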
We wanted the dashboard accessible to all developers and product managers without requiring them to install the Linkerd CLI or port-forward the linkerd-web deployment. The dashboard establishes a WebSocket connection with the end user's browser, and we found that the AWS ELBs (Elastic Load Balancers) created for our Traefik LoadBalancer services omit the headers required to establish that connection. Instead, we installed a new load balancer and used an Nginx server to supply the missing headers.
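The headers in question are the standard WebSocket upgrade headers. A minimal Nginx location block that restores them might look like this (the upstream address is hypothetical):

```nginx
location / {
    proxy_pass http://linkerd-web.linkerd.svc.cluster.local:8084;

    # Headers required for the dashboard's WebSocket connection
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```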
So, Linkerd is very simple to install. We'll get hands-on in part 2/5 of this article series.
The most important lesson here is that we spent the necessary time finding and addressing issues in development, which allowed a flawless installation of Linkerd in production. Left unaddressed, each of the problems we found in our development environment could have had severe effects in production. If you want to put Linkerd, or any other service mesh, into production, it's crucial to migrate gradually and give yourself enough time to identify and fix setup-related problems.