Envoy is an open-source edge and service proxy originally developed at Lyft and now a graduated CNCF project; it has gained significant popularity in the cloud-native ecosystem. It is designed to provide a flexible and scalable infrastructure layer for modern distributed systems.
As a proxy, Envoy sits between clients and services, acting as an intermediary for network communication. It is typically deployed as a sidecar alongside each service instance or as a centralized proxy in front of a fleet of services. Envoy is written in C++ and designed for high performance and low latency.
Envoy offers a wide range of features and capabilities that make it well-suited for modern microservices architectures. Here are some key aspects of Envoy:
- Service Discovery: Envoy integrates with various service discovery mechanisms, such as DNS, static configuration, and dynamic service registries like Consul or etcd. It can automatically discover and track available service instances, providing a dynamic and resilient routing infrastructure.
- Load Balancing: Envoy offers sophisticated load balancing algorithms and strategies, including round-robin, least connections, and consistent hashing. It distributes traffic across service instances to improve availability, scalability, and performance.
- Traffic Management: Envoy provides advanced traffic management capabilities, such as circuit breaking, timeouts, retries, and request/response transformation. These features help handle network failures, prevent cascading failures, and ensure reliable and predictable service communication.
- Security: Envoy supports Transport Layer Security (TLS) encryption and authentication. It can terminate SSL/TLS connections, handle certificates, and enforce authentication policies to secure communications between clients and services.
- Observability: Envoy offers extensive observability features, including access logging, metrics collection, and distributed tracing. It provides visibility into network traffic, performance metrics, and request tracing, allowing for monitoring, debugging, and performance analysis.
- Extensibility: Envoy has a highly extensible architecture, allowing for the addition of custom filters and plugins. Developers can build and integrate custom logic for tasks like authentication, rate limiting, transformation, and more.
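To build intuition for the simplest of the load-balancing strategies mentioned above, here is a minimal, hypothetical round-robin selector over a fixed list of upstream endpoints. This is a sketch only, not Envoy's actual implementation (Envoy's load balancers additionally account for endpoint health, weights, and locality):

```rust
// Illustrative round-robin endpoint selection (not Envoy's internal code).
struct RoundRobin {
    endpoints: Vec<String>,
    next: usize,
}

impl RoundRobin {
    fn new(endpoints: Vec<String>) -> Self {
        RoundRobin { endpoints, next: 0 }
    }

    // Pick the next endpoint, wrapping around to the start of the list.
    fn pick(&mut self) -> &str {
        let chosen = &self.endpoints[self.next];
        self.next = (self.next + 1) % self.endpoints.len();
        chosen
    }
}

fn main() {
    let mut lb = RoundRobin::new(vec![
        "10.0.0.1:8080".to_string(),
        "10.0.0.2:8080".to_string(),
        "10.0.0.3:8080".to_string(),
    ]);
    // Cycles through the endpoints in order, wrapping around.
    for _ in 0..4 {
        println!("{}", lb.pick());
    }
}
```

Strategies like least-connections or consistent hashing follow the same shape: a stateful selector that maps each request to one endpoint from the healthy set.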
Envoy has gained popularity due to its performance, scalability, and the rich set of features it provides. It is used by many organizations and projects, including service meshes such as Istio and Envoy-based ingress controllers, among many other cloud-native technologies. Its flexibility and extensibility make it suitable for a wide range of use cases, from small deployments to large-scale, complex architectures.
Envoy fits in as a key component in modern cloud-native architectures, particularly in microservices environments. It primarily serves as an edge and service proxy, providing a variety of features and capabilities to enhance the communication, resilience, and observability of services.
Here are a few common places where Envoy fits within the architecture:
- Service Mesh: Envoy is often used as the data plane component in service mesh architectures, such as Istio, Linkerd, or Consul Connect. In this context, Envoy is deployed as a sidecar alongside each service instance, enabling fine-grained traffic management, service discovery, load balancing, and observability.
- API Gateway: Envoy can be utilized as an API gateway or ingress controller. It acts as the entry point for external traffic, providing features like SSL termination, routing, request transformation, rate limiting, and authentication. Envoy's flexibility allows it to handle high volumes of traffic and adapt to evolving requirements.
- Load Balancer: Envoy can be used as a standalone load balancer to distribute traffic across multiple backend service instances. It supports various load balancing algorithms and health-checking mechanisms to ensure reliable and efficient traffic distribution.
- Edge Proxy: Envoy can be deployed as an edge proxy, serving as the first point of contact for external traffic before reaching the backend services. It provides security features such as SSL/TLS termination, DDoS protection, and authentication, helping to secure and control incoming requests.
- Internal Proxy: Envoy can also be used as an internal proxy within a network, facilitating communication between services within a cluster or across different clusters. It helps manage and optimize traffic between services, ensuring reliability, resilience, and observability.
By providing a consistent set of features across these different roles, Envoy simplifies the management and operation of network traffic in complex distributed systems. It abstracts away many network-related concerns, allowing developers and operators to focus on building and scaling their services while benefiting from the advanced capabilities provided by Envoy.
Internal components of Envoy:
+--------------------------------------+
|             Envoy Proxy              |
+--------------------------------------+
                |    ^
                |    |  Upstream
                v    |  Connections
+--------------------------------------+
|              Proxy Core              |
+--------------------------------------+
                |    ^
                |    |  Downstream
                v    |  Connections
+--------------------------------------+
|              Listeners               |
+--------------------------------------+
                |    ^
                |    |  Network
                v    |  Connections
+--------------------------------------+
|               Filters                |
+--------------------------------------+
                |    ^
                |    |  Inbound/Outbound
                v    |  Traffic
+--------------------------------------+
|          Service Discovery           |
+--------------------------------------+
                |    ^
                |    |  Upstream/Downstream
                v    |  Services
+--------------------------------------+
|            Configuration             |
+--------------------------------------+
Envoy is primarily composed of the following components:
- Proxy Core: The Proxy Core forms the foundational component of Envoy and is responsible for handling the network communication between clients and services. It provides features like network address translation, buffering, connection pooling, and handling low-level protocols.
- Listeners: Listeners define the ports and protocols on which Envoy should listen for incoming traffic. They handle network connections, SSL/TLS termination, and protocol detection.
- Filters: Filters are pluggable components that allow Envoy to inspect, transform, and act upon inbound and outbound traffic. Filters can perform tasks like authentication, authorization, rate limiting, request/response transformation, circuit breaking, and more. Envoy supports both built-in filters and custom filters developed using the WebAssembly (Wasm) standard.
- Upstream and Downstream Connections: Envoy manages connections between the proxy and upstream services as well as downstream clients. It handles load balancing, connection pooling, health checking, and retry mechanisms to ensure reliable and efficient communication.
- Service Discovery: Envoy integrates with service discovery mechanisms to automatically discover and track available service instances. It can work with DNS, static configuration, or dynamic service registries like Consul, etcd, or Kubernetes.
- Configuration: Envoy's behavior and features are configured through YAML-based configuration files (typically envoy.yaml) and, for dynamic updates, through its xDS configuration APIs. This configuration includes settings for listeners, filters, routing rules, timeouts, circuit breakers, and other operational aspects of Envoy.
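To make the configuration component concrete, here is a rough sketch of a minimal envoy.yaml that wires an HTTP listener to a single upstream cluster. The names, addresses, and ports are placeholders, and a real deployment would need more (TLS, observability, and so on):

```yaml
static_resources:
  listeners:
  - name: main_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.local, port_value: 8000 }
```

A file like this can be checked without serving traffic by running Envoy with `--mode validate -c envoy.yaml`.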
Regarding the operating system, Envoy is designed to run on various platforms and operating systems. It is primarily developed and tested on Linux-based systems. However, Envoy is also known to be compatible with other operating systems, including macOS and Windows. The specific operating system requirements and installation instructions can be found in the Envoy documentation and the platform-specific deployment guides.
Envoy's modular architecture and flexibility allow it to be used in a wide range of environments and deployment scenarios, making it a popular choice in cloud-native and microservices architectures.
This article focuses on Envoy filters, so let's look at how Envoy supports the execution of filters, especially custom filters.
Envoy uses a WebAssembly (Wasm) runtime to execute the Wasm modules that contain custom filters or extensions. The Wasm runtime provides the necessary infrastructure and capabilities to load, instantiate, and execute the custom filters within the Envoy proxy.
Here's an overview of the components involved in the Wasm runtime within Envoy.
The components below reflect the general architecture of Envoy's Wasm support and apply regardless of which runtime (e.g., V8 or Wasmtime) is used:
- Wasm VM: The Wasm Virtual Machine (VM) is responsible for executing the compiled WebAssembly bytecode. Envoy supports different Wasm VMs, including V8, Wasmtime, and WAVM (WebAssembly Virtual Machine).
- Host ABI: The Host ABI (Application Binary Interface) is the interface between the Wasm VM and the host environment, which in this case is Envoy. It allows the Wasm module to interact with the Envoy proxy and its runtime environment, accessing resources and invoking specific functions provided by Envoy.
- Wasm Sandbox: The Wasm Sandbox provides a secure and isolated execution environment for the Wasm modules. It ensures that the custom filters run in a controlled and isolated context, preventing them from interfering with the stability and security of the Envoy proxy and other modules.
- Wasm Extensions API: Envoy exposes an API called the Wasm Extensions API that allows the Wasm modules to interact with the proxy and its lifecycle events. This API provides hooks for various events, such as initialization, configuration, network events, and logging, enabling the custom filters to interact with the proxy and influence the traffic processing.
The use of the Wasm Runtime in Envoy enables the dynamic and flexible deployment of custom filters and extensions, empowering developers to extend Envoy's capabilities and tailor its behavior to their specific requirements.
Overview of how Envoy runs Wasm modules at runtime:
The Wasm runtime within Envoy integrates the above components, allowing the loading, instantiation, and execution of custom filters written in WebAssembly. When Envoy starts up, it loads the Wasm modules specified in the configuration, initializes the Wasm VM, and sets up the Wasm Sandbox. The custom filters can then interact with the Envoy proxy through the provided Host ABI and leverage the Wasm Extensions API to handle network traffic, perform transformations, and implement custom logic.
- Wasm Module Loading: When Envoy starts up or dynamically reloads its configuration, it loads the Wasm modules specified in the configuration. The Wasm modules can be referenced by their file paths or fetched remotely from a designated source. This is configured in the envoy.yaml file, typically located at /etc/envoy/envoy.yaml.
- Wasm VM and Sandbox: Envoy utilizes a WebAssembly Virtual Machine (Wasm VM), such as V8, Wasmtime, or WAVM, to execute the Wasm modules. The Wasm VM provides the runtime environment necessary for executing Wasm bytecode. Within the VM, the Wasm Sandbox provides isolation and security by sandboxing the execution of the Wasm modules.
- Module Lifecycle Management: Envoy manages the lifecycle of the loaded Wasm modules. It handles the instantiation, initialization, and termination of the modules as necessary. This allows modules to be dynamically added, removed, or updated without requiring a restart of the Envoy proxy.
- Integration with Envoy's Proxy Core: Once a Wasm module is loaded and running, it can interact with the Envoy proxy core. Envoy provides a set of APIs and hooks that allow the Wasm modules to tap into the traffic flow, inspect and modify requests and responses, and influence the behavior of Envoy. This integration enables custom logic and functionality to be executed during the processing of network traffic.
- Communication with External Entities: Wasm modules running within Envoy can communicate with external entities, such as other services, databases, or external APIs. Envoy provides mechanisms for Wasm modules to make network calls, access shared data, or communicate through event-driven interfaces, allowing them to perform complex tasks and integrate with external systems.
By leveraging the Wasm Sandbox and its integration with the Envoy proxy core, custom filters written in WebAssembly can seamlessly extend the functionality of Envoy at runtime. This runtime extensibility enables developers to add custom business logic, implement custom protocols, perform advanced transformations, and integrate with external services within the Envoy proxy environment.
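As an illustrative envoy.yaml fragment (the filter name, vm_id, and .wasm path are placeholders), a custom Wasm filter is typically wired into the HTTP filter chain inside the HTTP connection manager configuration like this:

```yaml
http_filters:
- name: envoy.filters.http.wasm
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
    config:
      name: "header_modifier"            # logical name of this filter instance
      root_id: "header_modifier"
      vm_config:
        vm_id: "header_modifier_vm"
        runtime: "envoy.wasm.runtime.v8" # the Wasm VM used to execute the module
        code:
          local:
            filename: "/etc/envoy/filters/header_modifier.wasm"
# The router filter must remain the last filter in the HTTP filter chain.
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

When Envoy loads this configuration, it instantiates the module in the configured VM and invokes its lifecycle and traffic callbacks as requests flow through the listener.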
Let's dive a little deeper into Envoy filters:
There are two broad categories of Envoy filters:
- Built-in Filters: Envoy provides a rich set of built-in filters out of the box. These filters are pre-built components that offer various functionalities and can be configured to perform specific tasks during the processing of network traffic. Examples of built-in filters in Envoy include filters for logging, routing, load balancing, authentication, rate limiting, and more. These filters are developed and maintained by the Envoy project itself.
- Custom Filters with WebAssembly (Wasm): Envoy also allows developers to extend its functionality by writing and deploying custom filters. WebAssembly (Wasm) is a portable binary instruction format that allows developers to run code in a sandboxed environment within Envoy. By leveraging Wasm, developers can write custom filters in languages like C++, Rust, or AssemblyScript, and compile them into Wasm modules. These custom filters can then be loaded and executed by Envoy at runtime, providing the flexibility to add custom logic and features specific to their use cases.
The use of Wasm enables Envoy to support a wide range of third-party and community-developed filters. These custom filters can be shared and reused across different Envoy deployments and can provide functionality beyond what is available in the built-in filters. The ability to write and deploy custom filters using Wasm significantly extends Envoy's capabilities and allows for the adaptation of Envoy to various specific requirements.
Envoy provides an extensible filter architecture that enables the combination of built-in and custom filters to implement complex traffic handling and processing workflows. This extensibility allows developers to tailor Envoy's behavior according to their specific needs and integrate it seamlessly into their infrastructure.
Repository for custom filters: WebAssembly Hub (webassemblyhub.io)
Deploying Envoy filters:
To deploy Envoy filters (such as Wasm modules) in the Anypoint Platform, you'll need to follow these general steps:
- Build or obtain the Envoy filter: First, build or obtain the Envoy filter or Wasm module you want to deploy. You can build custom filters using a Proxy-Wasm SDK (e.g., proxy-wasm) or find existing filters from open-source repositories or vendors.
- Prepare your Anypoint Platform: Ensure you have the necessary access and permissions to deploy and configure Envoy filters within your Anypoint Platform environment. This may involve working with your platform administrator or infrastructure team.
- Configure and deploy Envoy: Set up an Envoy deployment within your Anypoint Platform environment. This can be achieved using Anypoint Runtime Fabric, which provides a Kubernetes-based runtime for deploying and managing applications.
- Configure the Envoy filter: Modify the Envoy configuration (typically in an envoy.yaml file) to include the filter you want to deploy. The specific configuration will depend on the filter or Wasm module you are using. Refer to the documentation provided by the filter's developer for instructions on how to configure it.
- Deploy the Envoy configuration: Deploy the updated Envoy configuration to your Anypoint Platform environment. This can involve uploading the envoy.yaml file or using platform-specific deployment mechanisms.
- Validate and monitor: Verify that the Envoy filter is deployed and functioning correctly. Monitor the logs, metrics, and observability tools available in the Anypoint Platform to ensure the filter is operating as expected.
It's important to note that the exact steps and procedures for deploying Envoy filters may vary depending on your specific Anypoint Platform setup and the capabilities provided by the platform. It's recommended to consult the Anypoint Platform documentation or reach out to platform support or your administrator for detailed guidance and best practices on deploying Envoy filters within your environment.
envoy.yaml
envoy.yaml is a configuration file used by the Envoy proxy, which is an open-source edge and service proxy designed for cloud-native applications. Envoy is often used in microservices architectures to provide service discovery, load balancing, security, observability, and other features.
The envoy.yaml file contains the configuration settings for Envoy, specifying how it should behave and route traffic. It uses the YAML (YAML Ain't Markup Language) format, which is a human-readable data serialization format.
Here are some of the capabilities and features that can be configured in an envoy.yaml file:
- Listener Configuration: Envoy can listen on specific ports and protocols (e.g., HTTP, HTTPS, TCP) and perform tasks such as TLS termination, rate limiting, and protocol detection.
- Filter Chains: Envoy allows the definition of filter chains, which enable the inspection and manipulation of inbound and outbound traffic. Filters can perform tasks like authentication, authorization, request/response transformation, and traffic management.
- Routing Configuration: Envoy supports powerful routing configurations, including path-based routing, header-based routing, weighted load balancing, circuit breaking, retries, timeouts, and fault injection. It can route traffic to different upstream services based on specific rules.
- Service Discovery: Envoy integrates with service discovery mechanisms like DNS, static configuration, and dynamic service registries (e.g., Consul, etcd, Kubernetes) to automatically discover and load balance requests across available service instances.
- Health Checking: Envoy can periodically check the health of upstream service instances to ensure they are functioning properly. Unhealthy instances can be removed from the pool automatically.
- Observability: Envoy provides extensive observability features, including access logs, metrics, and distributed tracing. These features allow for monitoring and debugging of network traffic and can integrate with various observability tools and systems.
- Security: Envoy supports various security features such as SSL/TLS encryption, authentication, and authorization. It can terminate SSL/TLS connections, perform mutual TLS authentication, and enforce access control policies.
- Rate Limiting: Envoy can enforce rate limits on incoming requests to prevent abuse and protect services from being overwhelmed. It supports different rate limiting algorithms and can integrate with external rate limiting services.
These are just a few examples of the capabilities and features provided by Envoy. The envoy.yaml file allows you to configure and fine-tune these features according to your specific requirements in your deployment environment.
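As a small illustration of the routing capabilities, here is a hedged sketch of a route configuration combining path-based routing with a per-route timeout and retry policy (cluster names and the domain are placeholders):

```yaml
route_config:
  name: api_routes
  virtual_hosts:
  - name: api
    domains: ["api.example.com"]
    routes:
    - match: { prefix: "/v1/" }
      route:
        cluster: api_v1        # upstream cluster defined elsewhere in the config
        timeout: 5s            # upstream request timeout for this route
        retry_policy:
          retry_on: "5xx"      # retry when the upstream returns a 5xx response
          num_retries: 2
    - match: { prefix: "/" }
      route:
        cluster: api_default   # fallback for everything else
```

Routes are matched in order, so more specific prefixes should appear before the catch-all route.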
Where can we find the Envoy binary?
For testing, one option is the Envoy binary bundled in the official Istio proxy image; the official envoyproxy/envoy Docker image is another.
Types of filters:
Envoy provides a flexible architecture that allows developers to build different types of filters to customize and extend the functionality of the proxy. Here are the different types of filters that can be built for Envoy:
- Network Filters: Network filters operate at the transport layer and can intercept and manipulate network traffic at the TCP or UDP level. They can perform operations like traffic shaping, load balancing, authentication, and encryption. Examples of network filters in Envoy include the TCP proxy and the HTTP connection manager.
- HTTP Filters: HTTP filters operate at the application layer and can modify and inspect HTTP requests and responses. They can add or remove headers, modify the body, perform authentication and authorization, implement rate limiting, and enable content transformation. Examples of HTTP filters in Envoy include header manipulation, rate limiting, authentication, and CORS (Cross-Origin Resource Sharing) enforcement.
- TCP Proxy Filters: TCP proxy filters operate on TCP traffic and allow for layer-4 load balancing, routing, and traffic manipulation. They can intercept and modify TCP packets, route connections to different upstream hosts, perform health checks, and handle connection timeouts and retries.
- HTTP Router Filters: HTTP router filters handle the routing of HTTP requests to appropriate upstream services based on predefined rules. They match the requests against specified criteria such as path, headers, or other attributes and direct the traffic to the appropriate destination.
- Access Logging Filters: Access logging filters are responsible for capturing and recording information about incoming and outgoing requests and responses. They can log details like request headers, response codes, response sizes, and latency, which can be useful for monitoring, auditing, and analysis purposes.
- Transport Filters: Transport filters allow manipulation and analysis of the entire request and response stream, including both headers and body. They can perform transformations, compression, decompression, encryption, and decryption of the data.
- UDP Filters: UDP filters operate on UDP traffic and can modify and inspect UDP packets. They can perform tasks like load balancing, packet filtering, or protocol-specific operations on UDP-based protocols.
These are some of the common types of filters that can be built for Envoy. Each type of filter serves a specific purpose and provides hooks into the request/response processing flow, allowing developers to customize and extend Envoy's behavior to suit their application requirements.
Example of custom filters:
Below is an example of an Envoy filter, implemented using the Proxy-Wasm Rust SDK, that modifies an upstream request header.
use proxy_wasm::traits::*;
use proxy_wasm::types::*;

#[no_mangle]
pub fn _start() {
    // Register a factory that creates one HTTP context per request stream.
    proxy_wasm::set_http_context(|context_id, _| -> Box<dyn HttpContext> {
        Box::new(HeaderModifier {
            context_id,
            header_name: "X-Custom-Header",
            header_value: "Modified Value",
        })
    });
}

struct HeaderModifier {
    context_id: u32,
    header_name: &'static str,
    header_value: &'static str,
}

impl Context for HeaderModifier {}

impl HttpContext for HeaderModifier {
    // Called when the request headers arrive; add (or overwrite) the custom
    // header and let the request continue to the upstream.
    fn on_http_request_headers(&mut self, _: usize) -> Action {
        self.set_http_request_header(self.header_name, Some(self.header_value));
        Action::Continue
    }
}
In the above example, we define a Rust module that uses the Proxy-Wasm SDK to implement an Envoy filter. The filter modifies the request header of an upstream request by adding a custom header with a modified value.
The _start function is the entry point of the filter. It sets up the HTTP context and creates an instance of the HeaderModifier struct, which holds the necessary information for modifying the header.
The HeaderModifier struct implements the Context and HttpContext traits provided by the Proxy-Wasm SDK. The on_http_request_headers function is called when the filter receives the request headers. Within this function, we use the set_http_request_header method to modify the request header by adding a new custom header with the specified name and value.
Note: The above code is just a minimal example for reference purposes. In a real-world scenario, you may need to handle error conditions, handle more complex logic, and configure the filter's behavior based on specific requirements.
We often hear the term "cloud-native ecosystem". As a newcomer, I wanted to understand what it means; below is a sincere attempt!
Cloud-native ecosystem:
The term "cloud-native ecosystem" refers to the collection of technologies, practices, and frameworks that are specifically designed to support and enable the development and deployment of applications in cloud environments. It encompasses a set of principles and tools that facilitate the creation and operation of scalable, resilient, and portable applications optimized for cloud platforms.
Key characteristics of the cloud-native ecosystem include:
- Microservices Architecture: Cloud-native applications are typically built using a microservices architecture. This architectural style involves breaking down an application into small, loosely coupled services that can be developed, deployed, and scaled independently. Each microservice focuses on a specific business capability and communicates with other services through lightweight protocols such as HTTP or messaging systems.
- Containerization: Containers are a fundamental building block of the cloud-native ecosystem. They provide a lightweight and portable runtime environment that encapsulates an application and its dependencies. Containers allow for consistent deployment across different environments, enabling applications to run reliably and consistently regardless of the underlying infrastructure.
- Orchestration and Management: Cloud-native applications often leverage container orchestration platforms such as Kubernetes. These platforms automate the deployment, scaling, and management of containers across a cluster of machines. They provide capabilities for service discovery, load balancing, health monitoring, and self-healing, ensuring that applications are resilient, scalable, and highly available.
- DevOps Practices: The cloud-native ecosystem encourages the adoption of DevOps practices, which emphasize collaboration, automation, and continuous delivery. DevOps enables development and operations teams to work closely together, streamlining the software development lifecycle and facilitating rapid and frequent deployment of new features and updates.
- Infrastructure as Code: The cloud-native ecosystem embraces the concept of infrastructure as code (IaC). IaC involves defining infrastructure resources, such as virtual machines, networks, and storage, using declarative configuration files. Infrastructure can be provisioned, managed, and scaled programmatically, leading to increased efficiency, consistency, and reproducibility.
- Observability and Monitoring: Cloud-native applications are designed with observability in mind. They incorporate logging, metrics, and distributed tracing to gain insights into the behavior and performance of the application. Monitoring and observability tools help detect and troubleshoot issues, optimize performance, and ensure the reliability of cloud-native applications.
- Cloud Services and Platforms: The cloud-native ecosystem takes advantage of cloud services and platforms provided by public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These services offer managed databases, storage, messaging queues, machine learning capabilities, and more, allowing developers to leverage pre-built functionality and focus on application logic rather than infrastructure management.