ByteByteGo

Software Development

San Francisco, California 550,996 followers

Weekly system design newsletter you can read in 10 mins.

About us

A popular weekly newsletter covering topics and trends in large-scale system design, from the authors of the best-selling System Design Interview series.

Industry
Software Development
Company size
1 employee
Headquarters
San Francisco, California
Type
Privately Held

Updates

  • I’ve been writing the system design newsletter for 12 months. Here are the 5 most popular ones: 👇

    1. From 0 to Millions: A Guide to Scaling Your App
    2. A Crash Course in Caching
    3. API Architectural Styles
    4. How does ChatGPT work?
    5. 8 Data Structures That Power Your Databases

    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/3FEGliw

  • What happens when you type a URL into a browser? Let’s look at the process step by step.

    Step 1: The user enters a URL (bytebytego.com) into the browser and hits Enter. The first thing we need to do is translate the URL to an IP address. The mapping is usually stored in a cache, so the browser looks for the IP address in multiple layers of cache: the browser cache, OS cache, local cache, and ISP cache. If the browser can’t find the mapping in any cache, it asks a DNS (Domain Name System) resolver to resolve it.

    Step 2: If the IP address cannot be found in any of the caches, the DNS resolver performs a recursive DNS lookup against the DNS servers until the IP address is found.

    Step 3: Now that we have the IP address of the server, the browser sends an HTTP request to the server. For secure access to server resources, we should always use HTTPS. The browser first establishes a TCP connection with the server via a TCP 3-way handshake. The server then sends its public key to the client. The client uses the public key to encrypt a session key and sends it to the server. The server uses its private key to decrypt the session key. The client and server can now exchange encrypted data using the session key.

    Step 4: The server processes the request and sends back the response. For a successful response, the status code is 200. There are 3 parts to the response: HTML, CSS, and JavaScript. The browser parses the HTML and generates the DOM tree. It also parses the CSS and generates the CSSOM tree. It then combines the DOM tree and CSSOM tree into the render tree. The browser renders the content and displays it to the user.

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

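The layered cache lookup in Step 1 can be sketched in Python. The cache contents, the hypothetical IP address, and the fallback resolver below are all made up for illustration; a real browser consults OS and resolver caches it does not control directly:

```python
def resolve_ip(hostname, cache_layers, dns_resolver):
    """Check each cache layer in order (browser, OS, local, ISP);
    fall back to a recursive DNS lookup only on a full miss."""
    for layer in cache_layers:
        ip = layer.get(hostname)
        if ip is not None:
            return ip
    return dns_resolver(hostname)

# Hypothetical cache contents for illustration only.
browser_cache = {}
os_cache = {"bytebytego.com": "104.18.0.1"}  # made-up IP
layers = [browser_cache, os_cache]

# Found in the OS cache, so no DNS round trip is needed.
print(resolve_ip("bytebytego.com", layers, dns_resolver=lambda h: "0.0.0.0"))
```

Each cache miss pushes the lookup one layer further away, which is why a warm browser cache makes repeat visits noticeably faster.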
  • Authentication in REST APIs acts as the crucial gateway, ensuring that only authorized users or applications gain access to the API's resources.

    Some popular authentication methods for REST APIs include:

    1. Basic Authentication: Involves sending a username and password with each request, but can be less secure without encryption.
    When to use: Suitable for simple applications where security and encryption aren’t the primary concern, or when used over secured connections.

    2. Token Authentication: Uses generated tokens, like JSON Web Tokens (JWT), exchanged between client and server, offering enhanced security without sending login credentials with each request.
    When to use: Ideal for more secure and scalable systems, especially when avoiding sending login credentials with each request is a priority.

    3. OAuth Authentication: Enables third-party limited access to user resources without revealing credentials by issuing access tokens after user authentication.
    When to use: Ideal for scenarios requiring controlled access to user resources by third-party applications or services.

    4. API Key Authentication: Assigns unique keys to users or applications, sent in headers or parameters; while simple, it might lack the security features of token-based or OAuth methods.
    When to use: Convenient for straightforward access control in less sensitive environments, or for granting access to certain functionalities without the need for user-specific permissions.

    Over to you: Which REST API authentication method do you find most effective in ensuring both security and usability for your applications?

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

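On the wire, methods 1, 2, and 4 differ mainly in what goes into the request headers. A minimal sketch of the header construction (the credentials, token, and `X-API-Key` header name below are illustrative; real APIs document their own header names):

```python
import base64

def basic_auth_header(username, password):
    # Basic auth: base64-encode "user:password" (encoding, not encryption!).
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def bearer_auth_header(jwt_token):
    # Token auth: send a previously issued token (e.g. a JWT) instead
    # of credentials on every request.
    return {"Authorization": f"Bearer {jwt_token}"}

def api_key_header(key, header_name="X-API-Key"):
    # API key auth: a static key, often in a custom header.
    return {header_name: key}

print(basic_auth_header("alice", "secret"))
# {'Authorization': 'Basic YWxpY2U6c2VjcmV0'}
```

Note that the Basic scheme is trivially reversible, which is exactly why the post recommends it only over secured (TLS) connections.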
  • REST API vs. GraphQL

    When it comes to API design, REST and GraphQL each have their own strengths and weaknesses.

    REST
    - Uses standard HTTP methods like GET, POST, PUT, and DELETE for CRUD operations.
    - Works well when you need simple, uniform interfaces between separate services/applications.
    - Caching strategies are straightforward to implement.
    - The downside is it may require multiple round trips to assemble related data from separate endpoints.

    GraphQL
    - Provides a single endpoint for clients to query for precisely the data they need.
    - Clients specify the exact fields required in nested queries, and the server returns optimized payloads containing just those fields.
    - Supports mutations for modifying data and subscriptions for real-time notifications.
    - Great for aggregating data from multiple sources and works well with rapidly evolving frontend requirements.
    - However, it shifts complexity to the client side and can allow abusive queries if not properly safeguarded.
    - Caching strategies can be more complicated than with REST.

    The best choice between REST and GraphQL depends on the specific requirements of the application and development team. GraphQL is a good fit for complex or frequently changing frontend needs, while REST suits applications where simple and consistent contracts are preferred.

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

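The round-trip difference can be sketched with toy in-memory data standing in for a server (the users, posts, and field names are invented; a real GraphQL server resolves a typed query, not a Python dict):

```python
# In-memory data standing in for two REST endpoints.
USERS = {1: {"id": 1, "name": "Alice", "post_ids": [10]}}
POSTS = {10: {"id": 10, "title": "Caching 101", "body": "..."}}

def rest_get_user(user_id):   # round trip 1: GET /users/1
    return USERS[user_id]

def rest_get_post(post_id):   # round trip 2: GET /posts/10
    return POSTS[post_id]

def graphql_user_with_posts(user_id, post_fields):
    """One GraphQL-style call: the client names the fields it wants,
    the server assembles and trims the payload in a single response."""
    user = USERS[user_id]
    posts = [{f: POSTS[pid][f] for f in post_fields}
             for pid in user["post_ids"]]
    return {"name": user["name"], "posts": posts}

# REST needs two calls and returns full payloads; the GraphQL-style
# call returns only the requested "title" field in one response.
print(graphql_user_with_posts(1, ["title"]))
# {'name': 'Alice', 'posts': [{'title': 'Caching 101'}]}
```

The same trimming is what makes GraphQL responses hard to cache with plain HTTP caches: two clients asking for different field sets get different payloads from the same endpoint.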
  • What is k8s (Kubernetes)?

    k8s is a container orchestration system used for container deployment and management. Its design was greatly influenced by Google’s internal system Borg.

    A k8s cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

    The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers, and a cluster usually runs multiple nodes, providing fault tolerance and high availability.

    🔹 Control Plane Components

    1. API Server
    The API server talks to all the components in the k8s cluster. All operations on pods are executed by talking to the API server.

    2. Scheduler
    The scheduler watches pod workloads and assigns newly created pods to nodes.

    3. Controller Manager
    The controller manager runs the controllers, including the Node Controller, Job Controller, EndpointSlice Controller, and ServiceAccount Controller.

    4. etcd
    etcd is a key-value store used as Kubernetes' backing store for all cluster data.

    🔹 Node Components

    1. Pods
    A pod is a group of containers and is the smallest unit that k8s administers. All containers within a pod share a single IP address.

    2. Kubelet
    An agent that runs on each node in the cluster. It ensures containers are running in a Pod.

    3. Kube Proxy
    kube-proxy is a network proxy that runs on each node in your cluster. It routes traffic coming into a node from services and forwards requests to the correct containers.

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

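The controllers mentioned above all follow the same reconciliation pattern: compare desired state with observed state and act on the difference. A toy sketch of that loop (pod names and the action tuples are invented; real controllers act through the API server, not return values):

```python
def reconcile(desired_replicas, running_pods):
    """A toy reconciliation loop in the spirit of a k8s controller:
    diff the desired state against the observed state and emit the
    actions needed to converge."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Scale up: create pods until the desired count is reached.
        return [("create", f"pod-{i}")
                for i in range(len(running_pods), desired_replicas)]
    if diff < 0:
        # Scale down: delete the surplus pods.
        return [("delete", pod) for pod in running_pods[desired_replicas:]]
    return []  # already converged, nothing to do

print(reconcile(3, ["pod-0"]))  # [('create', 'pod-1'), ('create', 'pod-2')]
```

Running this loop continuously against the API server's watch stream is what makes k8s self-healing: a crashed pod simply shows up as a diff on the next iteration.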
  • How is data sent over the network? Why do we need so many layers in the OSI model?

    The diagram below shows how data is encapsulated and de-encapsulated when transmitted over the network.

    🔹 Step 1: When Device A sends data to Device B over the network via the HTTP protocol, an HTTP header is first added to the data at the application layer.

    🔹 Step 2: Then a TCP or a UDP header is added, and the data is encapsulated into TCP segments at the transport layer. The header contains the source port, destination port, and sequence number.

    🔹 Step 3: The segments are then encapsulated with an IP header at the network layer. The IP header contains the source/destination IP addresses.

    🔹 Step 4: A MAC header with source/destination MAC addresses is added to the IP datagram at the data link layer.

    🔹 Step 5: The encapsulated frames are passed to the physical layer and sent over the network as binary bits.

    🔹 Steps 6-10: When Device B receives the bits from the network, it performs the de-encapsulation process, which is the reverse of the encapsulation process. The headers are removed layer by layer, and eventually Device B can read the data.

    We need layers in the network model because each layer focuses on its own responsibilities. Each layer can rely on the headers for processing instructions and does not need to know the meaning of the data handed down from the layer above.

    Over to you: Do you know which layer is responsible for resending lost data?

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

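The encapsulation/de-encapsulation round trip can be sketched with text stand-ins for the headers (real headers are binary structures; the ports, addresses, and `|` delimiter here are purely illustrative):

```python
def encapsulate(payload: bytes) -> bytes:
    """Wrap the payload with one simplified header per layer,
    innermost (application) first, outermost (data link) last."""
    http = b"HTTP|" + payload                  # application layer
    tcp = b"TCP(src=49152,dst=443)|" + http    # transport layer
    ip = b"IP(10.0.0.1->10.0.0.2)|" + tcp      # network layer
    frame = b"MAC(aa->bb)|" + ip               # data link layer
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Strip headers layer by layer, as the receiving device does."""
    data = frame
    for _ in range(4):  # MAC, IP, TCP, HTTP
        _, data = data.split(b"|", 1)
    return data

wire = encapsulate(b"GET /")
print(decapsulate(wire))  # b'GET /'
```

Each function only knows its own header format, which mirrors the point in the post: a layer never needs to interpret the data handed down from above.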
  • Netflix's Tech Stack

    This post is based on research from many Netflix engineering blogs and open-source projects. If you come across any inaccuracies, please feel free to inform us.

    Mobile and web: Netflix has adopted Swift and Kotlin to build native mobile apps. For its web application, it uses React.

    Frontend/server communication: GraphQL.

    Backend services: Netflix relies on Zuul, Eureka, the Spring Boot framework, and other technologies.

    Databases: Netflix utilizes EVCache, Cassandra, CockroachDB, and other databases.

    Messaging/streaming: Netflix employs Apache Kafka and Flink for messaging and streaming purposes.

    Video storage: Netflix uses S3 and Open Connect for video storage.

    Data processing: Netflix utilizes Flink and Spark for data processing, which is then visualized using Tableau. Redshift is used for processing structured data warehouse information.

    CI/CD: Netflix employs various tools such as JIRA, Confluence, PagerDuty, Jenkins, Gradle, Chaos Monkey, Spinnaker, Atlas, and more for CI/CD processes.

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

  • Logging, tracing, and metrics are the 3 pillars of system observability.

    The diagram below shows their definitions and typical architectures.

    🔹 Logging
    Logging records discrete events in the system. For example, we can record an incoming request or a visit to a database as an event. It has the highest volume. The ELK (Elasticsearch-Logstash-Kibana) stack is often used to build a log analysis platform. We often define a standardized logging format for different teams to implement, so that we can leverage keywords when searching among massive amounts of logs.

    🔹 Tracing
    Tracing is usually request-scoped. For example, a user request goes through the API gateway, load balancer, service A, service B, and database, which can be visualized in the tracing systems. This is useful when we are trying to identify the bottlenecks in the system. We use OpenTelemetry to showcase the typical architecture, which unifies the 3 pillars in a single framework.

    🔹 Metrics
    Metrics are usually aggregatable information from the system. For example, service QPS, API responsiveness, service latency, etc. The raw data is recorded in time-series databases like InfluxDB. Prometheus pulls the data and transforms it based on pre-defined alerting rules. Then the data is sent to Grafana for display, or to the alert manager, which sends out email, SMS, or Slack notifications or alerts.

    Over to you: Which tools have you used for system monitoring?

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

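"Aggregatable" is the key property of metrics: raw samples collapse into a few numbers per interval. A minimal sketch of latency aggregation (the samples and the simple index-based p99 formula are illustrative; production systems typically use histograms or sketches instead of sorting raw samples):

```python
import math
import statistics

def aggregate_latencies(samples_ms):
    """Turn raw per-request latency samples into aggregatable metrics,
    the kind a time-series database stores per scrape interval."""
    samples = sorted(samples_ms)
    # Nearest-rank p99: the value below which ~99% of samples fall.
    p99_index = min(len(samples) - 1, math.ceil(0.99 * len(samples)) - 1)
    return {
        "count": len(samples),
        "avg_ms": statistics.mean(samples),
        "p99_ms": samples[p99_index],
    }

print(aggregate_latencies([12, 15, 11, 230, 14]))
# {'count': 5, 'avg_ms': 56.4, 'p99_ms': 230}
```

Note how the single 230 ms outlier barely moves the average but dominates the p99, which is why tail percentiles, not averages, drive most latency alerts.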
  • Time to test your Linux skills: What does /usr mean?

    The Linux file system used to resemble an unorganized town where individuals constructed their houses wherever they pleased. However, in 1994, the Filesystem Hierarchy Standard (FHS) was introduced to bring order to the Linux file system. By implementing a standard like the FHS, software can ensure a consistent layout across various Linux distributions.

    Nonetheless, not all Linux distributions strictly adhere to this standard. They often incorporate their own unique elements or cater to specific requirements.

    To become proficient in this standard, you can begin by exploring. Utilize commands such as "cd" for navigation and "ls" for listing directory contents. Imagine the file system as a tree, starting from the root (/). With time, it will become second nature to you, transforming you into a skilled Linux administrator. Have fun exploring!

    Over to you: which directory did you use most frequently?

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

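For reference while exploring, a small lookup table of common top-level FHS directories (the descriptions are paraphrased and the table is deliberately abridged; the FHS specification is the authoritative source):

```python
# A few top-level directories defined by the FHS and their purposes.
FHS_DIRS = {
    "/bin": "essential user command binaries",
    "/etc": "host-specific system configuration",
    "/home": "user home directories",
    "/usr": "shareable, read-only user data (the secondary hierarchy)",
    "/var": "variable data: logs, caches, spools",
    "/tmp": "temporary files",
}

def describe(path):
    return FHS_DIRS.get(path, "not in this (abridged) table")

print(describe("/usr"))
```

So /usr is not "user files" (those live in /home); it is the secondary hierarchy of shareable, read-only data such as installed programs and libraries.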
  • CAP, BASE, SOLID, KISS - what do these acronyms mean?

    The diagram below explains the common acronyms in system design.

    🔹 CAP
    The CAP theorem states that any distributed data store can only provide two of the following three guarantees:
    1. Consistency - Every read receives the most recent write or an error.
    2. Availability - Every request receives a response.
    3. Partition tolerance - The system continues to operate during network faults.
    However, this theorem has been criticized for being too narrow for distributed systems, and we shouldn’t use it to categorize databases. Network faults are guaranteed to happen in distributed systems, and we must deal with them in any distributed system. You can read more on this in “Please stop calling databases CP or AP” by Martin Kleppmann.

    🔹 BASE
    The ACID (Atomicity-Consistency-Isolation-Durability) model used in relational databases is too strict for NoSQL databases. The BASE principle offers more flexibility, choosing availability over consistency. It states that the system's state will eventually become consistent.

    🔹 SOLID
    The SOLID principles are quite famous in OOP. There are 5 components:
    1. SRP (Single Responsibility Principle)
    Each unit of code should have one responsibility.
    2. OCP (Open-Closed Principle)
    Units of code should be open for extension but closed for modification.
    3. LSP (Liskov Substitution Principle)
    A subclass should be substitutable wherever its base class is expected.
    4. ISP (Interface Segregation Principle)
    Expose multiple small interfaces with specific responsibilities rather than one general-purpose interface.
    5. DIP (Dependency Inversion Principle)
    Use abstractions to decouple dependencies in the system.

    🔹 KISS
    "Keep it simple, stupid!" is a design principle first noted by the U.S. Navy in 1960. It states that most systems work best if they are kept simple.

    Over to you: Have you invented any acronyms in your career?

    --
    Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/bbg-social #systemdesign #coding #interviewtips

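Several of the SOLID principles can be shown in one small sketch. The notifier classes below are invented for illustration: OCP because a new channel is added by extension rather than by editing `notify_all`, LSP because every subclass stands in wherever a `Notifier` is expected, and DIP because `notify_all` depends on the abstraction, not the concrete classes:

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """The abstraction all channels implement."""
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SlackNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"slack: {message}"

def notify_all(notifiers, message):
    # Depends only on the Notifier interface (DIP); adding an
    # SMSNotifier later requires no change here (OCP).
    return [n.send(message) for n in notifiers]

print(notify_all([EmailNotifier(), SlackNotifier()], "deploy done"))
# ['email: deploy done', 'slack: deploy done']
```

If a subclass broke the `send` contract, say by raising for some messages, it would violate LSP even though the code still type-checks; the principle is about behavior, not signatures.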
