Building, configuring, operating, and using a highly available Redis cluster

Preface

Redis is becoming more and more popular and is used in almost every project. When you use Redis, have you ever thought about how it delivers stable, high-performance service?

You can also try to answer these questions:

  • My use of Redis is very simple. Will I run into problems if I only use a standalone Redis instance?
  • My Redis instance went down and lost its data. What should I do? How can I make sure my business application is not affected?
  • Why do we need a master-slave cluster? What are its advantages?
  • What is a sharded cluster? Do I really need one?

If you already have some understanding of Redis, you must have heard of data persistence, master-slave replication, and sentinels. But what are the differences and connections between them?

If you have such doubts, then in this article I will go from 0 to 1, and from 1 to N, and walk you step by step through building a stable, high-performance Redis cluster.

Along the way, you will learn what optimization schemes Redis adopts to achieve stability and high performance, and why it adopts them.

Master these principles and you will be able to use Redis with ease and confidence.

This article also touches on many common Redis interview questions and is packed with practical content, so I hope you read it patiently.

Starting from the simplest: standalone Redis

First, let's start with the simplest scenario.

Suppose you have a business application and need to introduce Redis to improve its performance. At this point, you can deploy a standalone Redis instance, like this:

This architecture is very simple: your business application uses Redis as a cache. It queries data from MySQL, writes it into Redis, and then serves subsequent reads from Redis. Because Redis keeps its data in memory, these reads are very fast.
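
For reference, here is a minimal cache-aside sketch using the redis-py client. The `query_mysql()` helper, the key format, and the one-hour TTL are illustrative assumptions, not part of the original architecture:

```python
# Minimal cache-aside sketch with redis-py.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def query_mysql(user_id):
    # Placeholder: run the real SQL query here.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                  # cache hit: served from memory
        return json.loads(cached)
    data = query_mysql(user_id)             # cache miss: fall back to MySQL
    r.set(key, json.dumps(data), ex=3600)   # repopulate Redis with a 1-hour TTL
    return data
```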

If your traffic is not large, this architecture can basically meet your needs. Simple, isn't it?

As time goes by, your business grows, more and more data is stored in Redis, and your application relies on Redis more and more.

Redis is down for some reason

But then, one day, your Redis goes down for some reason. All of your business traffic now hits the backend MySQL directly, the pressure on MySQL rises sharply, and in severe cases it may even be overwhelmed.

What should you do at this time?

I guess your plan is to quickly restart Redis so that it can continue to provide services.

Redis data loss

However, because all of Redis's data lived in memory, that data is gone even after you restart. Redis works normally again, but it is empty, so business traffic still hits the backend MySQL and MySQL remains under heavy pressure.

What can you do? You fall into deep thought.

Is there any good way to solve this problem?

Since Redis only stores data in memory, is it possible to write a copy of this data to disk?

That way, when Redis restarts, we can quickly reload the data from disk into memory and continue serving requests normally.

Yes, this is a good solution. Writing in-memory data to disk is exactly what "data persistence" means.

Data Persistence: Be Prepared

Now, the Redis data persistence you envision is like this:

But what should be done specifically for data persistence?

I guess the simplest solution you can think of is this: every time Redis performs a write, it writes to memory and also writes a copy to disk, like this:

Yes, this is the simplest and most straightforward solution.

But think about it more carefully: this scheme has a problem. Every client write now has to go to both memory and disk, and writing to disk is far slower than writing to memory. This is bound to hurt Redis's performance.

So how do we avoid this performance hit?

We can optimize like this: the main thread writes to memory and returns the result to the client as soon as the memory write completes, while another thread writes the data to disk. That way the main thread is not slowed down by disk I/O.
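
To make the idea concrete, here is a toy sketch of that pattern in Python. It is only an illustration of the concept, not how Redis is actually implemented (Redis is written in C, and its real disk-flush policies are more involved):

```python
# Toy write-behind sketch: the caller writes to an in-memory dict and returns
# immediately, while a background thread drains a queue and appends the
# writes to a file on disk.
import queue
import threading
import time

memory_store = {}
disk_queue = queue.Queue()

def write(key, value):
    memory_store[key] = value      # fast in-memory write, returns right away
    disk_queue.put((key, value))   # hand the slow disk work to another thread

def disk_writer(path="writes.log"):
    with open(path, "a") as f:
        while True:
            key, value = disk_queue.get()
            f.write(f"{key}={value}\n")   # slow disk write, off the main path
            f.flush()

threading.Thread(target=disk_writer, daemon=True).start()
write("counter", 1)
time.sleep(0.1)   # give the background writer a moment before the script exits
```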

This is indeed a good approach. But let's also change perspective: what other ways are there to persist data?

At this time, you have to consider the usage scenarios of Redis.

Redis usage scenarios

Recall, when we use Redis, what scenarios do we usually use it for?

Yes, cache.

Using Redis as a cache means that Redis does not have to hold the full data set: for anything not in the cache, the business application can always fall back to the backend database. Those queries are a bit slower, but they do not affect business correctness.

Based on this characteristic, Redis persistence can also be done in the form of a "data snapshot".

So what is a data snapshot?

Simply put, you can understand this:

  • You think of Redis as a water cup, and writing data to Redis is equivalent to pouring water into this cup
  • At this time, you take a photo of this water cup with a camera. At the moment of taking the photo, the water volume in the water cup recorded in the photo is the data snapshot of the water cup

In other words, a Redis data snapshot records the data in Redis at a certain moment, and only that snapshot then needs to be written to disk.

Its advantage is that it writes to disk only when a snapshot needs to be persisted, and does not touch the disk at other times.

Based on this scheme, we can periodically take data snapshots for Redis and persist the data to disk.

Redis persistence: RDB and AOF

In fact, the two persistence approaches described above are exactly Redis's "RDB" and "AOF":

  • RDB: persists only a point-in-time snapshot of the data to disk (done by a forked child process)
  • AOF: persists every write operation to disk (the main thread writes to memory; depending on the configured policy, the disk write is done by the main thread or a background thread)

In addition to the differences mentioned above, they also have the following characteristics:

  • RDB writes to disk as compressed binary, so the file is small and data recovery is fast
  • AOF records every write command, so the data is the most complete, but the file is large and data recovery is slow

If you had to choose a persistence scheme, you could choose like this (a configuration sketch follows the list):

  • If your business is not sensitive to data loss, use the RDB solution to persist data
  • If your business requires high data integrity, use the AOF solution to persist data
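
As a concrete example, both options can be set either in redis.conf or at runtime via CONFIG SET. Below is a sketch using redis-py's CONFIG SET wrapper; the threshold values shown are just common defaults, not recommendations tuned for your workload:

```python
# Sketch: toggling Redis persistence settings at runtime with redis-py.
# These map to the redis.conf directives of the same names.
import redis

r = redis.Redis(host="localhost", port=6379)

# RDB: snapshot if >=1 key changed in 900s, >=10 in 300s, or >=10000 in 60s
r.config_set("save", "900 1 300 10 60 10000")

# AOF: log every write; fsync once per second as a durability/performance balance
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")
```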

Suppose your business has relatively high requirements for Redis data integrity and you choose the AOF solution. You will then run into these problems:

  • AOF records every write operation, so over time the AOF file grows larger and larger
  • Recovering data from such a large AOF file becomes very slow

What can we do? We raised the data-integrity requirement, and now data recovery has become harder. Is there a way to shrink the file and speed up recovery?

We continue to analyze the characteristics of AOF.

Features of AOF

Every write operation is recorded in the AOF file, but the same key may be modified many times. What if we keep only its last value?

Yes, this is the "AOF rewrite" we often hear about; you can also think of it as putting the AOF file on a diet.

By rewriting the AOF file regularly, we keep it from growing without bound, which in turn shortens recovery time (see the sketch below for how a rewrite is triggered).
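
A rewrite can be triggered automatically once the file has grown past configured thresholds, or kicked off manually. Here is a sketch with redis-py; the threshold values are simply the stock Redis defaults:

```python
# Sketch: controlling AOF rewrite from a client.
import redis

r = redis.Redis(host="localhost", port=6379)

# Auto-trigger: rewrite when the AOF has grown 100% beyond its last rewritten
# size and is at least 64 MB (the stock defaults).
r.config_set("auto-aof-rewrite-percentage", "100")
r.config_set("auto-aof-rewrite-min-size", "64mb")

# Manual trigger: ask Redis to rewrite the AOF in the background right now.
r.bgrewriteaof()
```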

Thinking further: is there any way to shrink the AOF file even more?

The characteristics of RDB and AOF:

Recall the respective characteristics of RDB and AOF mentioned earlier:

  • RDB is stored in binary + data compression mode, and the file size is small
  • AOF records every write command with the most complete data

Can we combine their respective advantages?

Of course. This is Redis's "hybrid persistence".

Specifically, during an AOF rewrite, Redis first writes a data snapshot into the AOF file in RDB format, and then appends the write commands generated during the rewrite to the same file. Because the RDB part is compressed binary, the AOF file becomes much smaller.

At this point, when you use the AOF file to recover data, the recovery time will be shorter!

Note: hybrid persistence is only supported on Redis 4.0 and above.
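
Enabling it is a single configuration switch; shown here at runtime with redis-py as an assumption about your setup (it can equally go in redis.conf):

```python
# Sketch: turning on hybrid persistence (Redis 4.0+). With this enabled, an
# AOF rewrite writes an RDB-format snapshot as the file's preamble and then
# appends subsequent write commands after it.
import redis

r = redis.Redis(host="localhost", port=6379)
r.config_set("appendonly", "yes")
r.config_set("aof-use-rdb-preamble", "yes")
```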

With these optimizations, you no longer have to worry so much about instance downtime: when it happens, you can quickly restore the data in Redis from the persistence files.

But is that enough?

Think carefully: although we have shrunk the persistence files as much as possible, restoring data still takes time, and during that window your business applications are still affected. What should we do?

Let's analyze whether there is a better solution.

If an instance goes down, restoring its data is our only option so far. But what if we deploy multiple Redis instances and keep their data synchronized in real time? Then when one instance goes down, we can simply pick one of the remaining instances and keep providing service.

That's right. This is the "master-slave replication, multiple copies" solution discussed next.

Master-slave replication: multiple copies

At this point, you can deploy multiple Redis instances, and the architecture model becomes like this:

Here we call the node that handles reads and writes in real time the master, and the node that synchronizes data from it in real time the slave.

The advantages of using multiple copies are:

  • Shorter downtime: if the master goes down, we can manually promote a slave to master and continue providing service
  • Better read performance: slaves can absorb part of the read traffic, improving overall application performance (a minimal replication setup sketch follows this list)
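
Here is a minimal sketch of wiring one replica to a master with redis-py, assuming two placeholder hosts; in practice this is usually configured with the replicaof directive in redis.conf rather than at runtime:

```python
# Sketch: pointing one Redis instance at another as its replica.
import redis

master = redis.Redis(host="10.0.0.1", port=6379)
replica = redis.Redis(host="10.0.0.2", port=6379)

# Tell the second instance to replicate from the first.
# (SLAVEOF is the pre-5.0 name of the REPLICAOF command.)
replica.slaveof("10.0.0.1", 6379)

master.set("greeting", "hello")   # writes go to the master
print(replica.get("greeting"))    # reads can be served by the replica; note that
                                  # replication is asynchronous, so a freshly
                                  # written key may lag for a moment
```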

This solution is good. It not only saves data recovery time, but also improves performance. Is there any problem with it?

You can think about it.

In fact, its problem is that when the master goes down, we need to "manually" upgrade the slave to the master, and this process also takes time.

Although this is much faster than restoring data, it still requires a human. And once a human is involved, you have to count their reaction time plus the time to perform the switch, so your business applications are still affected during that period.

How do we solve this problem? Can we make the switchover automatic?

For this, we need an "automatic failover" mechanism, which is exactly the capability of the "sentinel" we often hear about.

Sentinel: automatic failover

Now, we can introduce an "observer" and let this observer monitor the health of the master in real time. This observer is the "sentinel".

How to do it in detail?

  • The sentinel asks the master at intervals whether it is healthy
  • If the master responds normally, it is considered healthy; if the response times out, it is considered abnormal
  • When the sentinel detects an abnormality, it initiates a master-slave switchover (a toy health-check sketch follows this list)
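
To make the check concrete, here is a toy Python sketch of the idea: ping the master periodically and treat a timeout or connection error as trouble. This only illustrates the concept; it is not Sentinel's actual implementation:

```python
# Toy health-check sketch (not Sentinel's real code).
import time
import redis

master = redis.Redis(host="10.0.0.1", port=6379, socket_timeout=1)

def master_looks_healthy():
    try:
        return master.ping()       # True if the master answers in time
    except redis.RedisError:       # timeout, connection refused, etc.
        return False

while True:
    if not master_looks_healthy():
        print("master not responding, consider a failover")
    time.sleep(1)                  # "asks at intervals"
```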

With this solution, there is no need for human intervention, and everything becomes automated. Isn't it cool?

But there is still a problem. If the master is actually healthy, but the network between the sentinel and the master happens to misbehave when the sentinel asks, the sentinel may misjudge.

How to solve this problem?

The answer is that we can deploy multiple sentinels, spread across different machines, and let them monitor the master's status together. The process then becomes this:

  • Multiple sentinels each ask the master at intervals whether it is healthy
  • If the master responds normally, it is considered healthy; if the response times out, it is considered abnormal
  • When one sentinel judges the master abnormal (whether or not it is really just a network issue), it asks the other sentinels; if enough of them (a configurable quorum) also consider the master abnormal, the master is declared to have truly failed
  • Once the sentinels have negotiated and agreed that the master is faulty, they initiate a master-slave switchover

So by having multiple sentinels negotiate with each other about the master's status, we greatly reduce the probability of misjudgment.

Once the sentinels agree that the master has failed, there is another question: which sentinel performs the master-slave switchover?

The answer is that a sentinel "leader" is elected, and this leader carries out the switchover.

Which raises the next question: how is this leader chosen?

Think about how elections are done in real life.

Yes: by voting.

When electing the sentinel leader, we can set election rules like these:

  • Each sentinel asks the other sentinels to vote for it
  • Each sentinel votes only for the first sentinel that asks it, and votes only once
  • The sentinel that receives more than half of all votes is elected leader and initiates the master-slave switchover (a toy sketch of this majority rule follows the list)
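
As a tiny illustration of the "more than half" rule only (not Sentinel's real election code):

```python
# Toy majority check: a candidate becomes leader only with votes from
# more than half of all sentinels.
def is_leader(votes_received, total_sentinels):
    return votes_received > total_sentinels // 2

print(is_leader(2, 3))   # True: 2 of 3 is a majority
print(is_leader(2, 4))   # False: 2 of 4 is not more than half
```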

This election process is in fact what we often hear called a "consensus algorithm" in the field of distributed systems.

What is a consensus algorithm?

We deploy sentinels on multiple machines, and they need to work together to complete a task, so they form a "distributed system".

In the field of distributed systems, the algorithm for how multiple nodes reach a consensus on a problem is called a consensus algorithm.

In our scenario, multiple sentinels negotiating to elect a mutually recognized leader is exactly such a consensus problem.

Such algorithms also call for an odd number of nodes, so that even if one node fails, more than half of the nodes are still healthy and a correct result can still be produced; in other words, the algorithm tolerates faulty nodes.

There are many consensus algorithms in the field of distributed systems, such as Paxos and Raft. Sentinel leader election uses a Raft-style algorithm because it is simple enough and easy to implement.

Now we use multiple sentinels to monitor the status of Redis together, which avoids the misjudgment problem, and the architecture model becomes this:
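
From the application's point of view, the client connects through the sentinels rather than to a fixed master address, so that after a failover it automatically follows the newly promoted master. A sketch with redis-py is below; the sentinel addresses and the master name "mymaster" are placeholders:

```python
# Sketch: connecting through Sentinel with redis-py.
from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("10.0.0.11", 26379), ("10.0.0.12", 26379), ("10.0.0.13", 26379)],
    socket_timeout=0.5,
)

master = sentinel.master_for("mymaster", socket_timeout=0.5)    # for writes
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)    # for reads

master.set("key", "value")
print(replica.get("key"))
```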

Okay, let's summarize here first.

Starting from the simplest standalone version, your Redis has been optimized with data persistence, master-slave replicas, and a sentinel cluster. Its performance and stability keep getting better, and even a node failure is no longer something to worry about.

If your Redis is deployed in this architecture mode, it can basically run stably for a long time.

As time goes on, your business volume starts to grow explosively. Can this architecture still withstand such a large amount of traffic?

Let's analyze it together:

  • Stability: if Redis fails and goes down, sentinels plus replicas complete the master-slave switchover automatically
  • Read performance: as read requests grow, we can deploy more slaves and separate reads from writes to spread the read load
  • Write performance: write requests have grown too, but we only have one master instance. What if this single instance hits its limit?

See the problem? As your write volume keeps growing, a single master instance may not be able to absorb that much write traffic.

To solve this properly, you need to consider a "sharded cluster".

Sharded cluster: horizontal expansion

What is a "sharded cluster"?

Simply put, if one instance cannot handle the write pressure, can we deploy multiple instances, organize them according to certain rules, and present them as a single whole to the outside world? That would remove the bottleneck of funneling every write through one instance.

Therefore, the current architectural model becomes this:

The question now is: how do we organize all these instances?

We make the following rules:

  • Each node stores part of the data, and together all the nodes hold the full data set
  • Define a routing rule so that each key is always routed to a fixed instance for reads and writes

Depending on where the routing rule lives, sharded clusters fall into two categories:

  • Client Sharding
  • Server-side sharding

Client-side sharding means the key routing rule lives on the client side, like this:

The drawback of this approach is that the client has to maintain the routing rule; in other words, the routing logic ends up written into your business code (a toy sketch of this follows).
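
For illustration only, here is what such a hard-coded routing rule might look like; the hash-modulo scheme and node addresses are assumptions, not a recommended design:

```python
# Toy client-side sharding: the routing rule (hash of the key modulo the
# node count) lives in application code, which is exactly the coupling
# problem described above.
import zlib
import redis

nodes = [
    redis.Redis(host="10.0.0.21", port=6379),
    redis.Redis(host="10.0.0.22", port=6379),
    redis.Redis(host="10.0.0.23", port=6379),
]

def node_for(key):
    return nodes[zlib.crc32(key.encode()) % len(nodes)]

node_for("user:1001").set("user:1001", "alice")
print(node_for("user:1001").get("user:1001"))
```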

How can we avoid coupling the routing rules into business code?

One improvement is to encapsulate the routing rule in a module, and simply integrate that module wherever it is needed.

This is the scheme adopted by Redis Cluster.

Redis Cluster has failure detection and failover built in, so there is no need to deploy a separate sentinel cluster.

When you use Redis Cluster, your business application needs a cluster-aware Redis SDK. The routing rules are built into the SDK, so you don't have to write them yourself.
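
For example, with the cluster support in redis-py (version 4.0 or later is assumed), the client library handles slot routing for you; the startup node address here is a placeholder:

```python
# Sketch: talking to a Redis Cluster through a cluster-aware client.
from redis.cluster import RedisCluster

rc = RedisCluster(host="10.0.0.31", port=7000)

# The SDK computes CRC16(key) % 16384 to find the key's hash slot and sends
# the command to whichever node owns that slot.
rc.set("user:1001", "alice")
print(rc.get("user:1001"))
```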

Now let's look at server-side sharding.

In this approach, the routing rules are not placed on the client; instead, an intermediate proxy layer is added between the client and the server. This proxy layer is the Proxy we often hear about.

The data routing rules are maintained on this Proxy layer.

In this way, you don't need to care how many Redis nodes there are on the server side; you only need to interact with the Proxy.

The Proxy forwards each request to the corresponding Redis node according to the routing rules. Moreover, when the existing cluster instances can no longer support the traffic, you can scale horizontally by adding new Redis instances to improve performance. All of this is transparent and invisible to the client.

Open-source Redis sharding solutions in the industry, such as Twemproxy and Codis, take this approach.

Sharded clusters also involve many details around data migration and expansion. That is not the focus of this article, so I won't go into it here.

At this point, when you use a sharded cluster, you can calmly face the greater traffic pressure in the future!

Summary

Okay, let's summarize how we built a stable and high-performance Redis cluster step by step.

First, with the simplest standalone Redis, we found that when Redis failed and went down, the data could not be recovered. So we thought of "data persistence": writing the in-memory data to disk so that Redis can quickly reload it after a restart.

For persistence, the question is how to write data to disk efficiently. Redis provides two solutions, RDB and AOF, corresponding to data snapshots and per-command logging respectively. When data-integrity requirements are low, choose the RDB solution; when they are high, choose the AOF solution.

But the AOF file keeps growing over time, so we used AOF rewrite to slim it down. Then we combined the respective advantages of RDB and AOF into "hybrid persistence", applied during AOF rewrite, which shrinks the AOF file even further.

Next, we found that restoring data still takes time, which means business applications are still affected during recovery. We optimized further with the "multiple copies" solution, keeping several instances synchronized in real time; when one instance fails, another can be manually promoted to keep providing service.

But manual promotion requires human intervention, which also takes time. To automate the process we introduced the "sentinel" cluster: the sentinels negotiate with one another to detect the faulty node and complete the switchover automatically, greatly reducing the impact on business applications.

Finally, we turned our attention to supporting larger write traffic and introduced the "sharded cluster", letting multiple Redis instances share the write load. To face even greater traffic in the future, we can add new instances and scale horizontally, further improving cluster performance.

At this point, our Redis cluster can provide long-term stable and high-performance services for our business.

Here I drew a mind map to help you better understand the relationship between them and the evolution process.

Postscript

Having read this far, you should have your own view on how to build a stable, high-performance Redis cluster (and on the related Redis interview questions). In fact, the optimization ideas in this article all revolve around the core themes of "architecture design":

  • High performance: read-write separation, sharded cluster
  • High availability: data persistence, multiple copies, automatic failover
  • Scalability: sharded cluster, horizontal scaling

When we talk about sentinel clusters and sharded clusters, this also involves knowledge about "distributed systems":

  • Distributed Consensus: Sentinel Leader Election
  • Load balancing: sharded cluster data sharding, data routing

Of course, beyond Redis, you can apply this way of thinking to building and optimizing any data cluster, and examine how other systems do it.

For example, when you use MySQL, think about how MySQL differs from Redis, and what MySQL does to achieve high performance and high availability. The ideas are actually quite similar.

Distributed systems and data clusters are everywhere now. I hope this article helps you understand how such software evolves step by step, what problems it encounters along the way, and what schemes and trade-offs its designers chose to solve them.

Once you understand the principles and master the ability to analyze and solve problems, you will quickly find the "key points" when developing or learning other excellent software, master it in the shortest time, and put its strengths to use in real applications.

In fact, this thought process is also the essence of "architecture design": when designing software architecture, you discover, analyze, and solve problems, evolve and upgrade the architecture step by step, and finally strike a balance between performance and reliability. New kinds of software keep emerging, but these architectural ideas do not change. I hope you truly absorb them, so that you can adapt to whatever changes come.