Why is Redis so fast even though it is single-threaded?


Reason 1: Purely in-memory operations

Redis is an in-memory database: all of its data lives in RAM, so every read and write is completed in memory without touching the disk on the hot path, which is orders of magnitude faster than disk I/O.

On top of that, Redis is a key-value store: internally it keeps a global hash table, so looking up a value by its key takes only O(1) time on average.
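As an illustration, here is a minimal sketch assuming a local Redis instance on the default port and the third-party redis-py client; the key and value are made up:

    import redis  # third-party client: pip install redis

    # Connect to a Redis server assumed to be running locally on the default port.
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # SET stores the value in the server's in-memory hash table.
    r.set("user:42:name", "Alice")

    # GET finds the key with an average-case O(1) hash lookup, entirely in RAM.
    print(r.get("user:42:name"))  # -> Alice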

Reason 2: Rich data types
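Redis is not limited to plain strings: it ships with several data types (String, List, Hash, Set, and Sorted Set), and each one is backed by internal encodings chosen so that the common operations on that type stay fast (a Sorted Set, for example, combines a hash table with a skip list). A minimal sketch with the redis-py client, again assuming a local instance and with purely illustrative key names:

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    r.lpush("recent:logins", "alice", "bob")                   # List: O(1) push at the ends
    r.hset("user:42", mapping={"name": "Alice", "age": "30"})  # Hash: per-field access
    r.sadd("tags:post:7", "redis", "performance")              # Set: O(1) membership test
    r.zadd("leaderboard", {"alice": 1500, "bob": 900})         # Sorted Set: kept ordered by score

    # Range queries on a Sorted Set cost O(log N + M) thanks to the skip list.
    print(r.zrange("leaderboard", 0, -1, withscores=True))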



Reason 3: I/O multiplexing

Redis processes commands on a single thread, so how does it handle many client connections at the same time?

The answer is I/O multiplexing combined with non-blocking I/O. Both are provided by the operating system, so Redis only needs to call the corresponding OS APIs.

I/O multiplexing is a synchronous I/O model for watching many file descriptors at once and finding out which of them are ready for an I/O operation such as a read or a write.
The core idea is that through a single system call (select, poll, epoll, and so on) a program can block while waiting on many I/O operations at the same time. When one or more descriptors become ready, the call returns and tells the program which ones they are, and the program then performs the corresponding I/O on just those descriptors.
Compared with the traditional blocking I/O model, this lets a single thread handle many I/O operations without creating a thread per connection, which makes far better use of system resources and supports many more concurrent clients.
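To make the idea concrete, below is a minimal single-threaded echo server built on Python's standard selectors module, which picks epoll, kqueue, or plain select depending on the platform. It is only a sketch of the general pattern, not of Redis's actual event loop, and the address and echo behaviour are made up:

    import selectors
    import socket

    sel = selectors.DefaultSelector()  # epoll, kqueue, or select, chosen by the OS

    listener = socket.socket()
    listener.bind(("127.0.0.1", 9999))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)  # watch the listening socket too

    while True:
        # Block until at least one registered socket is ready for I/O.
        for key, _events in sel.select():
            sock = key.fileobj
            if sock is listener:                      # a new client is connecting
                conn, _addr = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                                     # an existing client sent data
                data = sock.recv(4096)
                if data:
                    sock.sendall(data)                # "process" it and write back
                else:                                 # the client closed the connection
                    sel.unregister(sock)
                    sock.close()

A single thread registered on thousands of sockets spends no time on idle connections; it wakes up only when the operating system reports that some of them are ready.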

Thanks to this, Redis can listen on many sockets from a single thread. Whenever one of them becomes readable or writable, Redis reads the client's request, operates on the corresponding data in memory, and writes the response back to that socket.


The whole process is very efficient. By using the event-driven model that I/O multiplexing provides, Redis monitors many socket connections at once but only spends time on the sockets that are actually active.

Reason 4: Non-CPU-intensive tasks

The downside of a single thread is obvious: it cannot take advantage of multiple CPU cores. The author of Redis has pointed out, however, that most Redis operations are not CPU-intensive (that is, they do not depend mainly on raw CPU computation); the real bottlenecks are memory and network bandwidth. Under heavy concurrent load, Redis needs more memory and more network bandwidth first, and if either runs short, the bottleneck shows up as exhausted memory or as time spent waiting on the network rather than on the CPU.

If a single Redis instance is not fast enough for your workload, the recommendation from the Redis author is to deploy several Redis nodes as a cluster and use multiple cores that way, rather than processing with multiple threads inside a single instance.

Reason 5: Advantages of single-threaded model

Given the characteristics above, a single thread is already enough for Redis to reach very high performance. Beyond that, the single-threaded model has its own advantages:

  • There is no performance loss due to multi-thread context switching.
  • There is no performance loss due to the need for locking when accessing shared resources.
  • The code is easy to develop and debug, and easy to maintain.

For these reasons, Redis ultimately adopted a single-threaded model for the whole request-processing path.

Multi-threaded Optimization

As noted at the beginning of the article, the Redis server process itself is actually multi-threaded.

Apart from the request-processing flow, which is handled by a single thread, Redis runs other worker threads in the background that asynchronously execute relatively time-consuming tasks. For example, the once-per-second fsync of the AOF file and the closing of file descriptors are offloaded to background threads (heavier jobs such as AOF rewriting are handled by a forked child process instead).

Starting with Redis 4.0, Redis provides the lazyfree mechanism: commands such as UNLINK, FLUSHALL ASYNC, and FLUSHDB ASYNC, together with configuration options such as lazyfree-lazy-eviction and lazyfree-lazy-expire, release memory asynchronously. This mainly solves the problem of the whole server blocking while a large block of memory is being freed.

Deleting a large key is often slow precisely because freeing its memory takes time. Redis therefore offers an asynchronous way to release memory: the expensive free is handed to another thread, the main thread is not held up, and overall performance improves.
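For example, with the redis-py client against a local instance (the key name and sizes are illustrative), UNLINK returns almost immediately and leaves the memory reclamation to a background lazyfree thread, whereas DEL would free everything on the main thread:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Build a deliberately large hash (sizes are illustrative only).
    r.hset("big:hash", mapping={f"field:{i}": "x" * 100 for i in range(10_000)})

    # DEL would block the main thread while all of that memory is released.
    # UNLINK only removes the key from the keyspace; a background thread frees it.
    r.unlink("big:hash")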

In Redis 6.0, multi-threading was introduced again, this time to read client requests and parse the protocol in parallel and improve performance further. It mainly relieves the pressure that protocol parsing puts on a single thread under high concurrency. Once requests have been parsed by the I/O threads, command execution is still handled sequentially by the single main thread.
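These I/O threads are disabled by default and are turned on in redis.conf; the two directives below exist as of Redis 6.0, and the thread count of 4 is only an example:

    # redis.conf (Redis 6.0 or later)
    # Use 4 threads for writing responses to clients ...
    io-threads 4
    # ... and let those threads also read and parse incoming requests.
    io-threads-do-reads yes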

As you can see, Redis neither clings to the single-threaded model out of conservatism nor adds threads just for the sake of having them. The author of Redis understands very clearly where a single thread is enough and where multiple threads help, and optimizes each part accordingly. That is something well worth learning from.

