Linkerd - The modern-day Service Mesh (Part 1/5)

Thanks to Naveen Verma sir for motivating me to study service mesh. He delivered an amazing session at the Azure Developer Community, where I had the chance to learn a lot about the subject.

A service mesh makes applications more observable, reliable, and secure. These features are provided at the platform layer rather than the application layer. A service mesh is typically implemented as a set of network proxies deployed alongside application code as sidecars, without the application needing to be aware of them.

Beyond encrypting communication, establishing zero-trust networks, and verifying identity, organizations can use a service mesh to respond swiftly to customer needs and competitive threats. Reliable application performance is essential for every firm.

Let's start from the Stone Age -> I am not talking about mainframes; let's talk monoliths.

(Most of my mentors came from the Monolith Era😂😂😂😂)

Earlier, the world had to deal with complex monoliths.

A software program built as a single, self-contained unit, independent of other applications, is said to have a monolithic architecture. The design isn't far from what the word "monolith" usually evokes: something enormous and glacial.

Monoliths are simpler to deploy because only one JAR or WAR file must be uploaded, and they are comparatively easier to develop than a microservices architecture. Network latency and security are also less of a concern than in a microservices design.

On the other hand, building and deploying monoliths becomes difficult and time-consuming as their size and dependencies grow. Microservices are scalable and less resource-hungry; because the modules are loosely coupled, they are simpler to build and deploy.

Thus, microservices emerged with improved scalability, because they are granular:

Each component of a microservices application can be scaled individually, which is an additional benefit. Compared to monoliths, where the entire program must be scaled even when only one part needs it, the whole process is more time- and cost-efficient.

What are Microservices?

Microservices are an architectural and organizational approach to software development in which an application is composed of small, autonomous services that communicate over well-defined APIs. Each service is owned by a small, self-contained team.

  • Microservice architecture is a type of service-oriented architecture in which an application is organized as a series of loosely coupled services.
  • Microservices architecture is characterized by fine-grained services and lightweight protocols.
  • Services communicate through well-defined APIs.
  • Each service encapsulates a separate piece of business logic instead of one monolithic problem statement.

This is how DevOps engineers need to architect:

Databases (DBs) are now provisioned separately from the microservices, with each service connecting to its own data store.

Advantages of Microservices

  • Greater agility
  • Faster time to market
  • Better scalability
  • Faster development cycles (easier deployment and debugging)
  • Easier to create a CI/CD pipeline for single-responsibility services
  • Isolated services have better fault tolerance
  • Platform- and language-agnostic services
  • Cloud-readiness

In Summary

  • Microservices may be deployed individually, giving teams more control.
  • Microservices are scalable in their own right.
  • Microservices decrease downtime by isolating faults.
  • The smaller codebase allows teams to grasp the code more quickly, making it easier to maintain.

Cons of Microservices Architecture

  • More collaboration is required (each team has to cover the whole microservice lifecycle)
  • Because of the architecture's complexity, it's more difficult to test and monitor.
  • Due to the requirement for microservices to communicate, performance will suffer (network latency, message processing, etc.)
  • The network is more difficult to manage (less fault tolerance, more load balancing needed, etc.)
  • Doesn't function well without a strong company culture in place (DevOps culture, automation, practices, etc.)
  • Security concerns (transaction safety is harder to maintain, distributed communication is more likely to go wrong, etc.)

What advantages do service meshes bring us?

  • Real-time platform health metrics: instantly monitor success rates, latencies, and request volumes for any meshed workload without any configuration changes.
  • Simpler than any other mesh: a minimalist design that is native to Kubernetes. No hidden magic, the least amount of YAML, and the fewest CRDs.
  • Zero-config mutual TLS: mutual TLS can be added transparently to any on-cluster TCP communication without setup (see the sketch after this list).
  • Self-contained control plane, progressive data-plane deployment, and a tonne of diagnostic and debugging tools, all designed by engineers, for engineers.
  • Drop-in reliability features: add latency-aware load balancing, request retries, timeouts, and blue-green deployments right away to make your applications durable.
  • Modern, lightweight Rust data plane: the amazingly small and lightning-quick Rust-based Linkerd2-proxy micro-proxy, built for performance and security.
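
To make the zero-config mTLS claim concrete, here is a minimal sketch of how you might verify that meshed traffic is actually encrypted, using the pre-2.10 CLI command names (later releases move these under linkerd viz); the namespace and deployment names are hypothetical:

```sh
# Sample live requests flowing through a meshed deployment; each output
# line includes a tls=true/false field showing whether mTLS was applied
linkerd tap deploy/my-deploy -n my-namespace

# Summarize which edges between workloads are secured, and by which identity
linkerd edges deployment -n my-namespace
```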

A service mesh's benefits

A service mesh works by injecting into your application pods a sidecar that proxies all of the application's network traffic. For each request, the sidecar collects data such as latency, request rate, success rate, status code, host, and destination service. The sidecars also establish encrypted TLS connections with one another to provide end-to-end encryption inside your cluster and prevent network snooping. Finally, the sidecars add intelligent routing between your services: circuit breakers, retries, timeouts, load balancing, canary deployments, and blue-green deployments.
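
As a minimal sketch of how that injection happens in practice with Linkerd (the namespace and deployment names here are hypothetical):

```sh
# Enable automatic proxy injection for every new pod in a namespace
kubectl annotate namespace my-app linkerd.io/inject=enabled

# Or inject the sidecar into an existing manifest explicitly
kubectl get deploy/my-deploy -n my-app -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```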

Our adoption of a service mesh was principally motivated by the need for better monitoring. Although they are good to have, mutual TLS and intelligent routing were not our main concerns. We also wanted a service mesh that was easy to use and required little maintenance.

We dug deep into the available resources, found that Linkerd was a great tool, and watched a session video that changed the way I thought about service mesh.

Linkerd provides a command-line program to set up and install the mesh. In no time, I had meshed a test cluster with Linkerd, installed the sample "books app," instrumented route monitoring, linked the metrics to our Prometheus server, and used the dashboard application to get visibility into application performance. The material on the Linkerd website was fairly simple to follow, and I had little trouble getting a minimally functional version running.
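
For reference, the workflow looked roughly like the official getting-started guide; a sketch of the commands, assuming a working kubectl context (verify the exact flags against the docs for your Linkerd version):

```sh
# Install the Linkerd CLI
curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin

# Validate the cluster, install the control plane, and verify it
linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check

# Deploy the sample "books app" with the proxy injected
kubectl create ns booksapp
curl -sL https://run.linkerd.io/booksapp.yml \
  | linkerd inject - \
  | kubectl -n booksapp apply -f -
```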

Once the demo version of Linkerd had been deployed successfully, we installed it on our test Kubernetes cluster and gradually added proxies to each namespace.

So, what is Linkerd?

No alt text provided for this image

Linkerd is an open-source service mesh designed to be deployed on several container schedulers and frameworks, including Kubernetes. It is regarded as the lightest and quickest service mesh. Without a single line of code change, it offers observability, reliability, and vital security for Kubernetes applications. It integrates effortlessly with Kubernetes and can comfortably handle thousands of requests per second. Many firms, including PayPal, Expedia, HashiCorp, Fiserv, and numerous others, use Linkerd in production.

Why is Linkerd used?

Linkerd was developed to address issues with the administration and operation of large applications and systems. An application's runtime behaviour depends critically on how its services interact with one another.

Linkerd gives developers improved visibility and reliability by providing a layer of abstraction to regulate this communication. Without a dedicated layer of control, it can be very difficult to assess and diagnose application issues, and that is where Linkerd helps.

The requirement for better monitoring

Our business recently pushed for improved analytics for our microservices. We specifically wanted to know:

  • How long services took to reply, which endpoints of a service took the longest, and how response times were distributed.
  • When a service failed, what route it was on, and how it failed (status code, logs).
  • Which services depend on a given service, and which services are to blame for outages.

Our existing monitoring gave us some coverage: response-code and latency data from our ingress controller, plus more in-depth metrics from certain apps instrumented with Prometheus. These frequently didn't provide enough detail for speedy problem diagnosis and failure-point identification, so we opted to add a service mesh to our Kubernetes clusters to improve our monitoring.
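
Linkerd's CLI answers exactly these questions once workloads are meshed. A hedged sketch using the pre-2.10 command names (later releases move these under linkerd viz; the namespace and deployment names are hypothetical):

```sh
# Golden metrics per deployment: success rate, request rate, latency percentiles
linkerd stat deploy -n my-namespace

# Per-route metrics, driven by a ServiceProfile when one is defined
linkerd routes deploy/my-deploy -n my-namespace

# Live-sample real requests to see paths, status codes, and latencies
linkerd tap deploy/my-deploy -n my-namespace
```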

What advantages does Linkerd offer?

Refer to the design goals of Linkerd in the official documentation.

  • Linkerd is entirely open-source software.
  • It has an extremely vibrant community.
  • Along with Istio, Linkerd is now one of the most popular service meshes.
  • Setting up Istio in a cluster is fairly challenging, while Linkerd requires no configuration.
  • Linkerd needs no special languages or libraries to function.
  • Scaling is made simple with Linkerd.
  • Almost all commonly used protocols are supported, including HTTP, HTTP/2, and gRPC.
  • It enables TLS across the board.
  • Linkerd intelligently distributes traffic using cutting-edge load-balancing algorithms.
  • It allows dynamic request routing and shifts traffic as necessary.
  • Linkerd offers distributed tracing to identify the root causes of problems.
  • Linkerd integrates effectively with current microservices architectures.
  • Linkerd delivers resilience, observability, and load balancing.
  • Prometheus and Grafana are included right out of the box.
  • Linkerd provides a dashboard that is useful for observing real-time events.

What distinguishes Linkerd from Istio and other service meshes?

Istio is heavier and more complicated than Linkerd. Linkerd, by contrast, is built from the ground up for security: mTLS is on by default, its data plane is written in Rust, a memory-safe language, and it undergoes frequent security audits. Finally, Linkerd is hosted by the CNCF and has a stated commitment to open source.

This is how Linkerd works across the control plane and the data plane.

Installing Linkerd on a development cluster

Linkerd demo installations may be converted to semi-production installations by making simple modifications to the default installation.

  • It is advisable to commit the installation and setup of Linkerd to code.
  • High availability mode installation is recommended for Linkerd.
  • The secrets ought to be handled by a secret-management service rather than being committed to git.
  • We anticipate that Linkerd will work nicely with ArgoCD and other GitOps solutions.
  • Users should be able to access the Linkerd dashboard without having to install and configure the Linkerd CLI.
  • Linkerd's enhanced monitoring should be simple for developers to integrate into their applications.

The first two bullet points are addressed by Linkerd's Helm installation and its high-availability values-ha.yaml file. As of Linkerd 2.7.0, you can create your root authority and issuer certificates yourself and let cert-manager manage certificate rotation, which takes care of bullet point three.
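
A hedged sketch of that flow, generating certificates with the step CLI as the Linkerd docs suggest and installing the chart in HA mode; the exact Helm value keys vary between chart versions (older charts prefix them with global.), so verify them against the docs for your release:

```sh
# Generate a trust anchor (root CA) and an issuer certificate
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key

# Install the control plane via Helm with the HA values file
helm repo add linkerd https://helm.linkerd.io/stable
helm install linkerd2 linkerd/linkerd2 \
  -f values-ha.yaml \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key
```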

It is simple to manage the Helm chart with ArgoCD or any other GitOps solution, but the app frequently appears to be out of sync with its current manifests in Kubernetes. Your initial Helm installation will never match what is actually in use, because of the issuer-certificate rotation carried out by cert-manager and the built-in rotation that renews each control-plane component's TLS certificate daily. Even worse, syncing the app will invalidate those certificates and may require restarting specific control-plane components to re-enable sidecar injection. We should keep this in mind when we upgrade Linkerd in the future.
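
For completeness, a sketch of the cert-manager resources that drive the issuer-certificate rotation; the resource names follow the Linkerd docs' automatic-rotation guide, but verify the API version and fields against your cert-manager release:

```sh
# Store the root CA created earlier as the secret cert-manager will sign from
kubectl create secret tls linkerd-trust-anchor \
  --cert=ca.crt --key=ca.key --namespace=linkerd

# An Issuer plus a Certificate that keep the identity issuer cert rotated
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: linkerd-trust-anchor
  namespace: linkerd
spec:
  ca:
    secretName: linkerd-trust-anchor
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 48h
  renewBefore: 25h
  issuerRef:
    name: linkerd-trust-anchor
    kind: Issuer
  commonName: identity.linkerd.cluster.local
  dnsNames:
  - identity.linkerd.cluster.local
  isCA: true
  privateKey:
    algorithm: ECDSA
  usages:
  - cert sign
  - crl sign
  - server auth
  - client auth
EOF
```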

To view the dashboard that Linkerd offers, you can forward the linkerd-web deployment's port to your local machine. The dashboard shows connection information such as success rate, latency, and request rate, as well as a dependency graph of your services and call-routing functionality.
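
A minimal sketch, using the 2.7-era service name and port (both changed in later releases, so check your installation):

```sh
# Let the CLI set up the port-forward and open the dashboard
linkerd dashboard

# Or forward the web service manually
kubectl -n linkerd port-forward svc/linkerd-web 8084:8084
# then browse to http://localhost:8084
```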

The dashboard should be accessible to all developers and product managers without requiring them to install the Linkerd CLI or port-forward the linkerd-web deployment. We found that the dashboard establishes a WebSocket connection with the end user's machine, and that the AWS Elastic Load Balancers (ELBs) created by our Traefik LoadBalancer services omitted the headers required to establish this connection. Instead, we stood up a new load balancer and used an Nginx server to supply the missing headers.
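
The fix boils down to adding the standard WebSocket upgrade headers in front of the dashboard. A hypothetical sketch of the relevant Nginx location block, written out as a config file (the upstream address assumes the 2.7-era linkerd-web service):

```sh
# Write an Nginx snippet that forwards the WebSocket upgrade headers
cat > linkerd-dashboard.conf <<'EOF'
location / {
    proxy_pass http://linkerd-web.linkerd.svc.cluster.local:8084;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
EOF
```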

So, it's very simple to install. Let's get hands-on in Part 2/5 of this article series.

The most important lesson here is that we spent the necessary time finding and addressing issues in development, which allowed for a flawless installation of Linkerd in production. Each of the problems we found in our development environment could have had severe effects in production if left unaddressed. If you want to put Linkerd, or any other service mesh, into production, it's crucial to migrate gradually and give yourself enough time to identify and fix setup-related problems.
