Interview: Chapter 5: Intermediate Frequently Asked Questions

Core features of Spring

The core features of Spring are IoC and AOP. IoC (Inversion of Control) means "inversion of control"; AOP (Aspect-Oriented Programming) means "aspect-oriented programming".

IoC: IoC is also known as DI (Dependency Injection). It is not a specific technology but a design idea. In any project of practical size we use many classes, each describing its own function, and specific business logic is completed through cooperation between classes. If every class has to manage the references and dependencies of the classes it works with, the code becomes highly coupled and extremely hard to maintain. IoC solves this problem: we hand the creation and wiring of these interdependent objects over to the Spring container, and each object needs to care only about its own business logic. Seen this way, the way of obtaining dependent objects has been inverted: the Spring container now controls how an object obtains external resources (other objects, files, and so on).

AOP: Aspect-oriented programming is often defined as a technique that promotes separation of concerns in software systems. A system is composed of many components, each responsible for a specific function, but besides their core function these components often take on extra responsibilities: services such as logging, transaction management, and security are frequently woven into components whose core job is business logic. These system services are called cross-cutting concerns because they cut across multiple components of the system.

Understanding JDBC

JDBC (Java DataBase Connectivity) is a Java API for executing SQL statements. It provides unified access to a variety of relational databases and consists of a set of classes and interfaces written in Java. JDBC provides a baseline on which higher-level tools and interfaces can be built, enabling developers to write database applications.

With JDBC it is easy to send SQL statements to different relational databases. In other words, with the JDBC API there is no need to write one program for Sybase, another for Oracle, and yet another for Informix: a programmer writes a single program using the JDBC API, and it can send SQL to whichever database is configured.
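A minimal sketch of this portability (the table `users(id, name)` and the connection details are hypothetical): only the JDBC URL and driver change per database, not the code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical table "users(id, name)". Switching databases means switching the
// JDBC URL (e.g. "jdbc:mysql://..." vs "jdbc:oracle:thin:@...") — this code stays the same.
public class JdbcSketch {
    public static String findUserName(String jdbcUrl, String user, String password, long id)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```

The try-with-resources blocks ensure the connection and statement are closed even on error, which is the idiomatic JDBC pattern.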

Ajax: synchronous vs. asynchronous

Synchronous means the sender sends data and then waits for the receiver to send back a response before sending the next packet.

Asynchronous means the sender sends data and then sends the next packet without waiting for the receiver's response.

Synchronous communication requires both parties to communicate with the same clock frequency and coordinate accurately. By sharing a single clock or timing pulse source, the sender and receiver can be accurately synchronized, and the efficiency is high; 

Asynchronous communication does not require the two parties to be synchronized. The sender and receiver can use their own clock sources. Both parties follow the asynchronous communication protocol, using characters as the data transmission unit. The time interval for the sender to transmit characters is uncertain, and the transmission efficiency is lower than that of synchronous transmission.

Consumers can invoke a service synchronously or asynchronously. From the caller's point of view, the two differ as follows:

Synchronous: the caller uses a single thread, which sends the request, blocks while the service runs, and waits for the response.

Asynchronous: the caller uses two threads; one thread sends the request and a separate thread receives the response.
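The two calling styles can be sketched in Java (a toy model: `service` is a stand-in for a real remote call, and all names here are hypothetical):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;

public class CallStyles {
    // Stand-in for a remote service call.
    static String service(String request) {
        return "response:" + request;
    }

    // Synchronous: the calling thread blocks until the response arrives.
    public static String callSync(String request) {
        return service(request);
    }

    // Asynchronous: one thread sends, a callback on another thread receives the response.
    public static String callAsync(String request) throws InterruptedException {
        final String[] result = new String[1];
        CountDownLatch done = new CountDownLatch(1);
        CompletableFuture.supplyAsync(() -> service(request))
                .thenAccept(r -> { result[0] = r; done.countDown(); });
        done.await();   // only so the demo can return the value; a real caller keeps working
        return result[0];
    }
}
```

In the asynchronous case the caller's thread is free between sending and the callback; the latch here exists only to make the demo deterministic.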

Spike (flash-sale) activities

Spike architecture design concept

Rate limiting: since only a small number of users can succeed in a spike, most of the traffic should be limited and only a small fraction allowed through to the back end of the service.

Peak shaving: a huge number of users pour into a spike system in an instant, so there is a high instantaneous peak right when the rush begins. A high peak flow is a major cause of overwhelmed systems, so turning an instantaneous burst of traffic into a steady flow spread over a period of time is an important design idea for a spike system. Common ways to achieve peak shaving are technologies such as caching and message middleware.

Asynchronous processing: a spike system is a high-concurrency system, and an asynchronous processing mode can greatly increase its concurrency. In fact, asynchronous processing is one way to achieve peak shaving.

Memory caching: the biggest bottleneck of a spike system is generally database reads and writes, which are disk I/O and therefore slow. Moving part of the data or business logic into a memory cache improves efficiency greatly.

Scalability: to support more users and greater concurrency, the system is best designed to be elastic and expandable; when traffic arrives, just add machines. During events such as Taobao's Double 11, large numbers of machines are added to handle the transaction peak.

Front-end solution

Browser side (js):

Page staticization: make all static elements of the activity page static and minimize dynamic elements; absorb the peak with a CDN.

Prohibit repeated submission: gray out the button after the user submits, and forbid resubmission.

Per-user rate limiting: only allow a user to submit one request within a certain period, for example by limiting per IP.

Backend solution

Server controller layer (gateway layer)

Limit uid (UserID) access frequency: the browser-side measures above intercept ordinary requests, but against malicious attacks or scripts the server control layer must also limit the access frequency of each uid.

Service layer

The measures above intercept only part of the requests. When the number of users in the spike is huge, even one request per user still means a very large number of requests reaching the service layer. For example, with 1,000,000 users grabbing 100 phones at the same time, the service layer faces up to 1,000,000 concurrent requests.

Use a message queue to buffer requests: the service layer knows there are only 100 phones in stock, so there is no need to pass all 1,000,000 requests to the database. These requests can first be written to a message queue; the database layer subscribes to the messages and decrements the inventory. A request whose inventory decrement succeeds returns "spike succeeded", and one that fails returns "spike over".

Use a cache to serve read requests: ticket services such as 12306 are typical read-heavy, write-light services; most requests are queries, so a cache can take the load off the database.

Use a cache to serve write requests: the cache can also absorb writes. For example, the inventory data can be moved from the database into Redis, all inventory decrements performed in Redis, and a background process then synchronizes the successful spike requests from Redis to the database.

Database layer

The database layer is the most fragile layer. In general, requests must be intercepted upstream in the application design so that the database layer handles only requests "within its capability". That is why queues and caches were introduced at the service layer above: so that the bottom-most database can sit back and relax.

Case: Using message middleware and caching to implement a simple spike system

Redis is a distributed cache system that supports multiple data structures. We can easily implement a powerful spike system using Redis.

We can use Redis's simple list structure: the spike activity serves as the key and each user id as a value, with the inventory quantity as the cap on how many requests are accepted. For each user's spike request we insert with RPUSH key value; once the number of inserted spike requests reaches the cap, all subsequent inserts are rejected.

Then we start several worker threads on the server side, use LPOP key to read the user ids of the successful grabbers, and then operate on the database to perform the final order and stock-reduction work.

Of course, the above Redis can also be replaced with message middleware such as ActiveMQ, RabbitMQ, etc., or the cache and message middleware can be combined. The cache system is responsible for receiving and recording user requests, and the message middleware is responsible for synchronizing the requests in the cache to the database.
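The bounded-queue idea above can be sketched without a Redis server by simulating the list in memory (a toy model, not the real Redis client: `tryEnqueue` stands in for RPUSH with a cap, `dequeue` for LPOP):

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicInteger;

// Simulates the Redis-list spike queue: RPUSH becomes tryEnqueue(), LPOP becomes dequeue().
public class SpikeQueue {
    private final ConcurrentLinkedDeque<String> queue = new ConcurrentLinkedDeque<>();
    private final AtomicInteger admitted = new AtomicInteger(0);
    private final int stock;

    public SpikeQueue(int stock) { this.stock = stock; }

    // "RPUSH": admit a request only while admitted < stock.
    public boolean tryEnqueue(String userId) {
        if (admitted.incrementAndGet() > stock) {
            admitted.decrementAndGet();   // over the cap: the spike is over for this user
            return false;
        }
        queue.addLast(userId);
        return true;
    }

    // "LPOP": a worker thread takes the next winning user id and writes the order to the DB.
    public Optional<String> dequeue() {
        return Optional.ofNullable(queue.pollFirst());
    }
}
```

The atomic counter plays the role the list-length check plays in Redis: requests beyond the stock never even enter the queue, so the database only ever sees at most `stock` orders.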

Single sign-on: what happens if the user logs in on another computer and changes the password?

Single Sign-On, abbreviated SSO, is one of the more popular solutions for enterprise business integration. SSO means that across multiple application systems, a user logs in once and can then access every mutually trusting application system.

When a user accesses an application system for the first time, not yet being logged in, they are redirected to the authentication system to log in. The authentication system verifies the login information the user provides; if verification passes, it returns an authentication credential to the user: the ticket. When the user then accesses other applications, they carry this ticket as their credential. The application system receiving the request sends the ticket to the authentication system to check its legitimacy. If it passes, the user can access application system 2 and application system 3 without logging in again.

To implement SSO, the following main functions are required:

All application systems share an identity authentication system.
A unified authentication system is one of the prerequisites of SSO. Its main job is to compare the user's login information with the user database and authenticate the user; after successful authentication, it generates a unified credential (the ticket) and returns it to the user. The authentication system is also responsible for verifying tickets and judging their validity.

All application systems can identify and extract ticket information.
To achieve the SSO function, allowing users to log in only once, the application system must be able to identify users who have logged in. The application system should be able to identify and extract the ticket, and through the communication with the authentication system, it can automatically determine whether the current user has logged in, so as to complete the single sign-on function.

When the user logs in at another terminal and changes the password, the information bound to the corresponding ticket changes, so the original ticket fails verification and becomes invalid. The user must therefore log in again with the new password.

In our e-commerce project, the verification string used for single sign-on is called a token; the ticket described here corresponds to that token.
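A toy model of the ticket invalidation described above (everything here is hypothetical and in-memory; a real authentication center would persist and sign its tickets): the ticket is bound to the password in force when it was issued, so a password change elsewhere invalidates it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Toy authentication center: changing the password on another machine
// invalidates all tickets issued under the old password.
public class AuthCenter {
    private final Map<String, String> passwords = new HashMap<>();   // user -> current password
    private final Map<String, String> tickets = new HashMap<>();     // ticket -> "user:password" snapshot

    public void register(String user, String password) { passwords.put(user, password); }

    public String login(String user, String password) {
        if (!password.equals(passwords.get(user))) return null;
        String ticket = UUID.randomUUID().toString();
        tickets.put(ticket, user + ":" + password);
        return ticket;
    }

    public boolean verify(String ticket) {
        String snapshot = tickets.get(ticket);
        if (snapshot == null) return false;
        String[] parts = snapshot.split(":", 2);
        // The snapshot no longer matches after a password change, so the ticket fails.
        return parts[1].equals(passwords.get(parts[0]));
    }

    public void changePassword(String user, String newPassword) { passwords.put(user, newPassword); }
}
```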

After Redis goes down, what happens to the data in the shopping cart? How to relieve MySQL pressure?

Restore from the *.rdb snapshot file that Redis saved.

In addition, Redis has the AOF feature, which replays the logged write commands on startup to restore the previous state.

These measures reduce data loss to an extent, but a restarted Redis still needs to read data back from the relational database, adding pressure on MySQL.

Depending on the situation: if Redis master-slave replication was set up beforehand, the data can be served from another Redis node. If not (for example, the company would not pay for replicas), client access can only be restricted temporarily while the Redis data is restored first.

How does Dubbo keep working after ZooKeeper goes down?

1. The role of Zookeeper:

        ZooKeeper is used to register services and perform load balancing. Callers must know which machine provides which service; in essence this is the mapping between IP addresses and service names. The mapping could be hard-coded in the caller's business code, but then if a provider machine goes down the caller will not know and will keep sending requests to it. ZooKeeper detects dead machines through its heartbeat mechanism and removes their IP-to-service mappings from the list. As for supporting high concurrency: simply put, that is horizontal scaling; computing power is increased by adding machines without changing code. A newly added machine registers its services with ZooKeeper, and more providers can then serve more customers.

2. dubbo:

      Dubbo is a tool for managing the middle layer. Between the business layer and the data warehouse there are many service consumers and service providers that need to be scheduled, and Dubbo provides a framework to solve this problem.

      Note that Dubbo here is just a framework; what you put on it is entirely up to you, like a car chassis that you still need to fit with wheels and an engine. To do scheduling within this framework there must be a distributed registry storing the metadata of all services. You can use ZooKeeper or something else, but in practice everyone uses ZooKeeper.

3. The relationship between zookeeper and dubbo:

      Dubbo abstracts the registry, so that it can be connected to different storage media to provide services to the registry, such as ZooKeeper, Memcached, Redis, etc.

      Introducing ZooKeeper as the storage medium also brings in ZooKeeper's features. First, load balancing: the capacity of a single registry is limited, and once traffic reaches a certain level it must be spread; a ZooKeeper group paired with the corresponding web applications can easily achieve load balancing. Second, resource synchronization: load balancing alone is not enough; data and resources must be synchronized between nodes, and a ZooKeeper cluster naturally provides this. Third, naming service: a tree structure maintains the global service address list; when a service provider starts, it writes its own URL to the designated node /dubbo/${serviceName}/providers on ZooKeeper, and that operation completes service publication. Other features include master elections, distributed locks, and so on.

After MQ completes the order, it sends a message to lock the inventory. What if the message keeps failing?

How is the gateway implemented?

Define a Servlet to receive requests, then run each request through preFilters (encapsulating request parameters), routeFilters (forwarding the request), and postFilters (writing the output). The three groups of filters share the request, response, and other global variables.

(1) Put the request and response into a ThreadLocal

(2) Execute the three groups of filters

(3) Clear the variables held in the ThreadLocal
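The three steps above can be sketched as follows (a minimal model: the three filter stages are inlined as comments, and the "request"/"response" are plain strings rather than servlet objects):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal pre/route/post pipeline; the context travels through a ThreadLocal,
// which is cleared at the end so pooled threads do not leak state between requests.
public class MiniGateway {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static String handle(String request) {
        try {
            CTX.get().put("request", request);                                     // preFilter: wrap params
            CTX.get().put("response", "routed(" + CTX.get().get("request") + ")"); // routeFilter: forward
            return "out:" + CTX.get().get("response");                             // postFilter: write output
        } finally {
            CTX.remove();   // step (3): clear the environment variables
        }
    }
}
```

The `finally { CTX.remove(); }` is the important part: in a real gateway the worker threads are pooled, so a forgotten ThreadLocal would leak one request's state into the next.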

For Redis and MySQL data synchronization, do you delete the Redis cache first or operate on MySQL first?

Whether you write the database first and then delete the cache, or delete the cache first and then write the database, data inconsistency is possible either way.

Writes and reads are concurrent, so ordering cannot be guaranteed. If the cache is deleted but the database write has not yet happened, another thread may read, find the cache empty, load the old data from the database, and write it into the cache, leaving dirty data there. If the database is written first but the writing thread crashes before the cache is deleted, the stale cache also causes inconsistency. And with a Redis cluster or master-slave setup (write to master, read from slave), replication lag can likewise produce inconsistency.

In that case, consider operating on the database first and then deleting the Redis entry. Real-time data such as inventory is read directly from the database; from a business-logic perspective we can tolerate stale cached data during queries, but not incorrect data at settlement time.
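A sketch of the "write the database, then invalidate the cache" ordering discussed above, with two maps standing in for MySQL and Redis (a simulation only; it does not reproduce the race conditions, just the cache-aside flow):

```java
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: writes go to the "database" first, then the cache entry is deleted;
// the next read misses, reloads from the database, and repopulates the cache.
public class CacheAside {
    private final ConcurrentHashMap<String, Integer> db = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();

    public void write(String key, int value) {
        db.put(key, value);     // 1. write the database
        cache.remove(key);      // 2. then invalidate the cache
    }

    public Integer read(String key) {
        Integer v = cache.get(key);
        if (v == null) {
            v = db.get(key);            // cache miss: load from the database
            if (v != null) cache.put(key, v);
        }
        return v;
    }
}
```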

Why is HashMap not thread-safe, and how can it be made thread-safe?

When put is called on a HashMap and the number of elements exceeds the threshold (capacity times load factor), a resize is triggered: a rehash that redistributes the contents of the old array into the new, larger array. In a multithreaded environment, if other threads are putting elements at the same time and keys hash to the same bucket, the linked list in that bucket can end up with a cycle (in pre-JDK 8 implementations), causing an infinite loop on get. HashMap is therefore not thread-safe.

Use the java.util.Hashtable class, which is thread-safe.

Use java.util.concurrent.ConcurrentHashMap, this class is thread safe.

Use the java.util.Collections.synchronizedMap() method to wrap the HashMap object to obtain a thread-safe Map and perform operations on this Map.
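Of the three options, ConcurrentHashMap is usually preferred. A small demonstration (names are illustrative) that concurrent per-key updates are not lost:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeCounter {
    // A plain HashMap here could lose updates under concurrency
    // (and, in old JDKs, loop forever during a concurrent resize).
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    public void increment(String key) {
        counts.merge(key, 1, Integer::sum);   // atomic per-key update
    }

    public int total(String key) { return counts.getOrDefault(key, 0); }

    // Run several threads incrementing the same key; with ConcurrentHashMap
    // the final count equals threads * perThread.
    public static int countConcurrently(int threads, int perThread) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment("k");
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return c.total("k");
    }
}
```

Hashtable and Collections.synchronizedMap() achieve safety with one lock over the whole map; ConcurrentHashMap locks at a finer granularity and scales better under contention.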

Spring Cloud principles: Eureka and the gateway

Eureka is the registry in a microservice architecture, responsible for service registration and discovery. The Eureka Client component registers the service's information with the Eureka Server. The Eureka Server maintains a registry storing the machine and port number of every service.

Zuul is the microservice gateway. This component handles network routing, and having a gateway brings many benefits, such as unified degradation, rate limiting, authentication and authorization, security, and so on.

How Dubbo + ZooKeeper clustering works

Dubbo is Alibaba's open-source distributed service framework. Its biggest characteristic is its layered structure, which decouples the layers (or keeps coupling as loose as possible). From the service-model perspective, Dubbo adopts a very simple model: a party either provides a service or consumes a service, so it abstracts two roles, the service provider (Provider) and the service consumer (Consumer).

ZooKeeper is an Apache top-level project that provides efficient, highly available distributed coordination services for distributed applications: foundational services such as data publish/subscribe, load balancing, naming, distributed coordination/notification, and distributed locks. Thanks to its convenience, performance, and stability, ZooKeeper is widely used in large distributed systems such as Hadoop, HBase, Kafka, and Dubbo.

Nginx is a free, open-source, high-performance HTTP server and reverse proxy, and also an IMAP/POP3/SMTP proxy server. Nginx can serve as the HTTP server for publishing a website, and as a reverse proxy it can implement load balancing.

How design patterns are reflected in the project

1. Template method pattern: defines the skeleton of an algorithm in an operation and defers some steps to subclasses, e.g. JdbcTemplate.
2. Proxy pattern: Spring's use of proxies is reflected in AOP.
3. Observer pattern: defines a one-to-many dependency between objects, so that when one object's state changes, all objects that depend on it are notified and updated automatically. The common appearance of the Observer pattern in Spring is the listener implementation, such as ApplicationListener.
4. Adapter pattern: e.g. the MethodBeforeAdviceAdapter class.
5. Strategy pattern: uses Java inheritance and polymorphism.
   Case 1: an add/subtract calculator; define a calculation interface, have addition and subtraction classes implement it, and pass in the addition object when adding.
   Case 2: exporting Excel, PDF, or Word, where a different object is created for each format.
   Simply put: to do several variants of one task, you create several interchangeable objects.
6. Singleton pattern: avoids the frequent creation and destruction of a class that is used globally.
7. Factory pattern: divided into three kinds, simple factory, factory method, and abstract factory, which produce "products" according to "demand" and decouple "demand", "factory", and "product".

Simple factory: products are produced according to a tag passed in at construction, with different products coming out of the same factory. Every new product requires changing the factory's type check, and that judgment grows with the number of products, which makes extension and maintenance troublesome.
Simple factory project case: generate different kinds of serial numbers depending on the value passed in (for example, 1 for a payment serial number, 2 for an order serial number).

Factory method (defer instantiation of a class to subclasses):
The factory creates an instance by reflection from the A.class type passed in.
With a product interface, product classes A and B, and a factory class that produces the different product objects, adding a product means only adding a product class C; the factory stays unchanged.
Project case: a mail server with three protocols, POP3, IMAP, and HTTP. Model these as three product classes, then define a factory method.

Abstract factory: one factory produces multiple products that form a product family; the products of different product families derive from different abstract products.
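The simple-factory case above (serial numbers selected by a tag) can be sketched like this; the `PAY-`/`ORD-` formats are hypothetical, and the `switch` is exactly the judgment that grows with each new product:

```java
// Simple factory from the serial-number example: the tag passed in
// (1 = payment serial, 2 = order serial) selects which product the factory produces.
public class SerialFactory {
    public interface SerialGenerator { String next(long seq); }

    static class PaymentSerial implements SerialGenerator {
        public String next(long seq) { return "PAY-" + seq; }
    }
    static class OrderSerial implements SerialGenerator {
        public String next(long seq) { return "ORD-" + seq; }
    }

    public static SerialGenerator create(int type) {
        switch (type) {
            case 1: return new PaymentSerial();
            case 2: return new OrderSerial();
            default: throw new IllegalArgumentException("unknown type " + type);
        }
    }
}
```

A factory-method refactoring would move each `case` into its own factory subclass, so adding a product no longer touches existing code.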

How MQ prevents message loss

Transaction mechanism: before sending, open a transaction (channel.txSelect()), then send the message. If anything goes wrong during sending, roll the transaction back (channel.txRollback()); if the send succeeds, commit it (channel.txCommit()). The drawback is reduced throughput.

Confirm mechanism: every message published on the channel is assigned a unique ID (starting from 1). Once the message has been delivered to all matching queues, RabbitMQ sends an Ack (containing the message's unique ID) to the producer, which lets the producer know the message reached the destination queue correctly. If RabbitMQ fails to process the message, it sends a Nack instead, and you can retry the operation.

How distributed transactions are reflected in the project

1. Two-phase commit (2PC)

Two-phase commit sacrifices some availability in exchange for consistency. In .NET, for example, the API provided by TransactionScope can be used to implement two-phase commit programmatically in a distributed system; WCF supports part of this functionality. Across multiple servers, however, you must rely on DTC for transactional consistency.

**Advantages:** ensures strong data consistency as far as possible, suitable for critical areas demanding high consistency. (In reality it cannot guarantee 100% strong consistency.)

**Disadvantages:** complicated to implement, sacrifices availability, and has a large performance impact; unsuitable for high-concurrency, high-performance scenarios. For distributed calls across interfaces there is currently no implementation in the .NET world.

2. Compensating transactions (TCC)

TCC adopts a compensation mechanism. Its core idea: for every operation, a corresponding confirmation and a compensation (undo) operation must be registered. It has three phases:

The Try phase mainly checks the business system and reserves resources.

The Confirm phase confirms and commits on the business system. Once Try has succeeded and Confirm begins, Confirm is assumed not to fail; that is, as long as Try succeeds, Confirm must succeed.

The Cancel phase cancels business that was executed but hit an error and needs rolling back, releasing the reserved resources.

**Advantages:** compared with 2PC, the implementation and flow are relatively simple, though data consistency is weaker than 2PC's.

**Disadvantages:** quite obvious; steps 2 and 3 can still fail. TCC is an application-layer compensation approach, so programmers must write a lot of compensation code, and in some scenarios certain business flows are hard to define and handle with TCC.
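The Try/Confirm/Cancel phases can be sketched with an in-memory inventory (a toy model; a real TCC framework also handles timeouts, idempotency, and persistence):

```java
// In-memory TCC sketch: Try freezes stock, Confirm commits the deduction, Cancel releases it.
public class TccInventory {
    private int available;
    private int frozen;

    public TccInventory(int stock) { this.available = stock; }

    // Try: check the business condition and reserve resources.
    public boolean tryReserve(int n) {
        if (available < n) return false;
        available -= n;
        frozen += n;
        return true;
    }

    // Confirm: the frozen stock is really sold; nothing can fail here by assumption.
    public void confirm(int n) { frozen -= n; }

    // Cancel: roll back, releasing the reserved resources.
    public void cancel(int n) {
        frozen -= n;
        available += n;
    }

    public int available() { return available; }
}
```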

3. Local message table (asynchronous guarantee) (the most used technical solution)

The message producer adds an extra message table recording the send status of each message. The message-table row and the business data must be committed in the same transaction, which means they must live in the same database. The message is then sent to the consumer via MQ; if sending fails, it is retried.

The message consumer processes the message and completes its own business logic. If its local transaction succeeds, processing has succeeded; if it fails, execution is retried. If the failure is a business failure, a compensation message can be sent back to the producer, notifying it to perform a rollback or similar operation.

The producer and consumer periodically scan the local message table, and send the unprocessed messages or failed messages again.
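The producer side of this scheme can be sketched as follows (a simulation: a map stands in for the message table, and `deliver` stands in for the MQ send; real code would do the insert inside the business transaction):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.LongPredicate;

// Local-message-table sketch: the business row and the message row are "committed" together,
// and a periodic scan re-sends every message still marked PENDING.
public class LocalMessageTable {
    public enum Status { PENDING, SENT }

    private final Map<Long, Status> messages = new LinkedHashMap<>();
    private long nextId = 1;

    // One local transaction: save the business data and the outgoing message together.
    public long saveOrderWithMessage() {
        long id = nextId++;
        messages.put(id, Status.PENDING);
        return id;
    }

    // The periodic scan: attempt delivery of every pending message; mark delivered ones SENT.
    public List<Long> scanAndSend(LongPredicate deliver) {
        List<Long> sent = new ArrayList<>();
        for (Map.Entry<Long, Status> e : messages.entrySet()) {
            if (e.getValue() == Status.PENDING && deliver.test(e.getKey())) {
                e.setValue(Status.SENT);
                sent.add(e.getKey());
            }
        }
        return sent;
    }

    public Status status(long id) { return messages.get(id); }
}
```

Because undelivered messages simply stay PENDING, the next scan retries them: this is the "asynchronous guarantee" of eventual delivery.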

4. MQ transaction message

Some third-party MQs support transactional messages, for example RocketMQ, in a way similar to two-phase commit; but some mainstream MQs, such as RabbitMQ and Kafka, do not support them.

Taking Ali's RocketMQ middleware as an example, the idea is roughly as follows:

In the first phase, the Prepared message is sent and the producer obtains the message's address.
In the second phase the local transaction executes; in the third phase, the address obtained in the first phase is used to access the message and change its state.

That is, the business method submits two requests to the message queue: one to send the message and one to confirm it. If sending the confirmation fails, RocketMQ periodically scans the transactional messages in the cluster; when it finds a Prepared message, it checks back with the sender, so the producer must implement a check interface. Based on the strategy set by the sender, RocketMQ then decides whether to roll the message back or continue sending the confirmation. This ensures that sending the message and the local transaction succeed or fail together.

Scopes of Spring-managed beans, and why they are not garbage-collected

When a Bean instance is created through the spring container, not only the instantiation of the Bean instance can be completed, but also a specific scope can be specified for the Bean. Spring supports the following five scopes:

singleton: singleton mode; in the entire Spring IoC container there is only one instance of a bean defined with singleton scope.

prototype: prototype mode; every call to the container's getBean method for a prototype-scoped bean produces a new bean instance.

request: For each HTTP request, the Bean defined using request will generate a new instance, that is, each HTTP request will generate a different Bean instance. This scope is valid only when Spring is used in a web application

session: for each HTTP Session, the bean defined with session scope generates a new instance. Again, this scope is valid only when Spring is used in a web application.

globalsession: for each global HTTP Session, a new bean instance is generated. Typically this only works in a portlet context; again, it is valid only when Spring is used in a web application.

Among them, the two scopes of singleton and prototype are more commonly used. For a singleton-scoped bean, every time you request the bean, you will get the same instance. The container is responsible for tracking the state of the Bean instance, and is responsible for maintaining the life cycle behavior of the Bean instance; if a Bean is set to the prototype scope, each time the program requests the bean with that id, Spring will create a new Bean instance and then return it to the program. In this case, the Spring container only uses the new keyword to create a Bean instance. Once the creation is successful, the container will not track the instance and will not maintain the state of the Bean instance.

If you do not specify the scope of the Bean, Spring uses the singleton scope by default. When Java creates a Java instance, it needs to apply for memory; when it destroys the instance, it needs to complete garbage collection. These tasks will lead to an increase in system overhead. Therefore, the creation and destruction of prototype scope beans are relatively expensive. Once a singleton scoped Bean instance is created successfully, it can be reused. Therefore, unless necessary, try to avoid setting the Bean to prototype scope.

At the bottom, Spring stores bean instances in a map, and the map holds strong references to them, so they are not garbage-collected and can be reused.
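The singleton-vs-prototype behavior can be sketched with a toy container (not Spring's actual code; just the map-of-singletons idea described above):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy container: singleton beans are cached in a map (strong references, never GC'd while
// the container lives); prototype beans are created fresh on every getBean and not tracked.
public class TinyContainer {
    private final Map<String, Supplier<Object>> definitions = new HashMap<>();
    private final Map<String, Object> singletons = new HashMap<>();
    private final Map<String, Boolean> isSingleton = new HashMap<>();

    public void register(String name, Supplier<Object> factory, boolean singleton) {
        definitions.put(name, factory);
        isSingleton.put(name, singleton);
    }

    public Object getBean(String name) {
        if (isSingleton.get(name)) {
            // Created once, then held by the map — this strong reference is why it is never GC'd.
            return singletons.computeIfAbsent(name, n -> definitions.get(n).get());
        }
        return definitions.get(name).get();   // prototype: new, untracked instance each time
    }
}
```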

When uploading a picture, should it be compressed on the front end or the back end?

On the front end: this relieves server pressure and reduces the amount of data transmitted from the very start.

How are distributed transactions concretely implemented? In which modules are they used?

Implementation: see the distributed-transactions section above.


How Redis and MySQL work together

Redis handles data reads and writes, and a queue processor writes the data into MySQL at regular intervals. The main problem in this setup is keeping MySQL and Redis in sync, and the key to that synchronization is the primary key in the MySQL database. When Redis starts, it reads all table key values from MySQL and stores them; when data is written to Redis, the Redis primary key is incremented and read. If the MySQL update fails, the cache must be cleared and the Redis primary key resynchronized promptly.

Spring AOP annotations

Set transaction isolation level

Multithreading problems (principles): how to view a thread after Stop

How Spring parses beans

Bean parsing means parsing the beans out of our XML file. The entry point, as seen above, uses ClassPathXmlApplicationContext to obtain the ApplicationContext, so the analysis starts from the corresponding constructor in the ClassPathXmlApplicationContext class.

The getBean() method starts the creation process. getBean() has a series of overloads, all of which eventually call the doGetBean() method.

The getSingleton method tries to get the singleton bean from the cache

If the current bean is a singleton and the cache does not exist, create a singleton object through the getSingleton(String beanName, ObjectFactory<?> singletonFactory) method

It mainly involves the following three methods:




The createBeanInstance method is used to create a Bean instance

The populateBean method mainly injects attributes into the Bean

The initializeBean method mainly handles various callbacks
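The three phases above can be sketched in a few lines. This is a drastically simplified illustration, not Spring's real code (the real logic lives in Spring's AbstractAutowireCapableBeanFactory); DemoBean and its fields are hypothetical names invented for the example.

```java
import java.util.Map;

// Hypothetical bean used only for this illustration.
class DemoBean implements Runnable {
    String name;
    boolean initialized;
    public void run() { initialized = true; }   // stands in for an init callback
}

// Simplified sketch of Spring's doCreateBean pipeline:
// instantiate -> inject properties -> run callbacks.
class MiniBeanFactory {
    static DemoBean doCreateBean(Map<String, Object> props) {
        DemoBean bean = new DemoBean();          // createBeanInstance: instantiate
        bean.name = (String) props.get("name");  // populateBean: inject properties
        bean.run();                              // initializeBean: invoke callbacks
        return bean;
    }
}
```

Calling `MiniBeanFactory.doCreateBean(Map.of("name", "demo"))` returns a bean whose property has been injected and whose init callback has run, mirroring the order of the three Spring methods.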

Why InnoDB supports transactions but myisam does not

MyISAM: This is the default type. It is based on the traditional ISAM type; ISAM is the abbreviation of Indexed Sequential Access Method, a standard method for storing records and files. Compared with other storage engines, MyISAM has the most tools for checking and repairing tables. MyISAM tables can be compressed and support full-text search. They are not transaction-safe and do not support foreign keys; if a transaction is rolled back, the rollback is incomplete, so operations are not atomic. If you perform a large number of SELECTs, MyISAM is the better choice.

InnoDB: This type is transaction-safe. It has the same characteristics as the BDB type and also supports foreign keys. InnoDB tables are fast and have richer features than BDB, so if you need a transaction-safe storage engine, InnoDB is recommended. If your workload performs a lot of INSERTs or UPDATEs, you should use InnoDB tables for performance reasons.

Idempotence prevents repeated orders

1. Token mechanism to prevent repeated page submission 

2. Unique index to prevent new dirty data 

3. Pessimistic lock and optimistic lock mechanism

4. Distributed lock
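The token mechanism from point 1 can be sketched as follows. This is a minimal in-memory illustration with hypothetical names (TokenStore, issueToken, submitOrder); in production the token set would typically live in Redis so it is shared across servers.

```java
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Token-based idempotency: the server issues a one-time token together with
// the order form; submitting consumes the token atomically, so a repeated
// submit of the same form is rejected.
class TokenStore {
    private final Set<String> tokens = ConcurrentHashMap.newKeySet();

    String issueToken() {               // called when rendering the order page
        String token = UUID.randomUUID().toString();
        tokens.add(token);
        return token;
    }

    boolean submitOrder(String token) { // remove() is atomic: only the first submit wins
        return tokens.remove(token);
    }
}
```

The atomicity of `Set.remove` is what guarantees that even two concurrent submits with the same token cannot both succeed.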

Besides read-write splitting clusters, what other kinds of clusters can MySQL form?

MySQL supports master-slave replication clusters and read-write splitting clusters.

Where do projects use dynamic proxies?

AOP uses dynamic proxies to implement aspects, via either the JDK dynamic proxy or CGLIB.
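A minimal JDK dynamic proxy looks like this; it is how Spring AOP weaves advice around interface-based beans. The Greeter interface and the logging handler are hypothetical examples, not Spring code.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter { String greet(String name); }

class ProxyDemo {
    // Wrap a target in a proxy that records "advice" around every call.
    static Greeter withLogging(Greeter target, StringBuilder log) {
        InvocationHandler h = (proxy, method, args) -> {
            log.append("before ").append(method.getName()).append(";"); // before advice
            Object result = method.invoke(target, args);                // call real target
            log.append("after;");                                       // after advice
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, h);
    }
}
```

Note that the JDK proxy requires an interface; CGLIB instead subclasses the target class, which is why Spring falls back to CGLIB for classes without interfaces.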

Concurrency of ConcurrentHashMap: synchronized and CAS

How to achieve optimistic locking and pessimistic locking at the code level and sql level

SQL level:

1. pessimistic lock

    1. Exclusive lock: while a transaction is operating on the data, that data is locked until the operation completes; only then can other transactions operate on it. This prevents other processes from reading or modifying the data in the table.

    2. Implementation: In most cases, it depends on the lock mechanism of the database to achieve

     Generally use select ... for update to lock the selected data, e.g. select * from account where name='Max' for update. This SQL statement locks all records in the account table that meet the search condition (name='Max'). Before the transaction is committed (locks acquired during the transaction are released when it commits), other transactions cannot modify these records.

2. optimistic lock

    1. If someone updates before you, your update should be rejected and you can let the user re-operate.

    2. Implementation: Most are implemented based on the data version (Version) recording mechanism

     Specifically, it can be realized by adding a version number or timestamp field to the table. When reading data, the value of the version field is read along with it, and each update increments the version value by one. When submitting the update, compare the current version value in the database with the version value read initially: if they are equal, perform the update; otherwise the data is considered stale, the update is rejected, and the user re-operates.

Code level:

Pessimistic lock: a piece of execution logic plus a pessimistic lock. When different threads execute at the same time, only one thread can execute; the other threads wait at the entrance until the lock is released.

Optimistic lock: a piece of execution logic plus an optimistic lock. Different threads may enter and execute at the same time; when finally updating the data, each thread checks whether the data has been modified by another thread (i.e. whether the version is still the same as at the start of execution). If not, it updates; otherwise it gives up this operation.
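The two code-level approaches can be contrasted in one small class. This is an illustrative sketch: `synchronized` plays the pessimistic lock (threads block at the entrance), and `AtomicInteger.compareAndSet` plays the optimistic lock (the update fails if another thread changed the value first).

```java
import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    private int pessimisticCount = 0;
    private final AtomicInteger optimisticCount = new AtomicInteger(0);

    // Pessimistic: only one thread at a time; others wait at the entrance.
    synchronized void incrementPessimistic() {
        pessimisticCount++;
    }

    // Optimistic: read the current "version", then compare-and-set;
    // returns false (give up / retry) if another thread got there first.
    boolean incrementOptimistic() {
        int seen = optimisticCount.get();
        return optimisticCount.compareAndSet(seen, seen + 1);
    }

    int pessimistic() { return pessimisticCount; }
    int optimistic()  { return optimisticCount.get(); }
}
```

A real optimistic scheme usually retries the CAS in a loop; returning false here makes the conflict case visible.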

What is the difference between locks in JDK 1.7 and JDK 1.8?

MySQL stored procedure

An SQL statement needs to be compiled each time before it is executed. A stored procedure (Stored Procedure) is a set of SQL statements written to accomplish a specific function, compiled once and stored in the database; the user calls and executes it by specifying its name and supplying parameters (if the stored procedure takes parameters).

A stored procedure is a programmable function, created and saved in the database, and can be composed of SQL statements and control structures. When you want to perform the same function on different applications or platforms, or encapsulate specific functions, stored procedures are very useful. The stored procedure in the database can be seen as a simulation of the object-oriented method in programming, which allows to control the way data is accessed.

Advantages of stored procedures:

(1). Enhance the function and flexibility of SQL language: Stored procedures can be written with control statements, which has strong flexibility and can complete complex judgments and more complex calculations.

(2). Standard component programming: After the stored procedure is created, it can be called multiple times in the program without having to rewrite the SQL statement of the stored procedure. And database professionals can modify the stored procedure at any time without affecting the source code of the application.

(3). Faster execution speed: If an operation contains a large amount of Transaction-SQL code or is executed multiple times, then the execution speed of the stored procedure is much faster than that of batch processing. Because the stored procedure is pre-compiled. When querying a stored procedure for the first time, the optimizer analyzes and optimizes it, and gives an execution plan that is finally stored in the system table. However, batch Transaction-SQL statements must be compiled and optimized each time they are run, and the speed is relatively slow.

(4). Reduce network traffic: For operations (such as query and modification) of the same database object, if the Transaction-SQL statement involved in this operation is organized into a stored procedure, then when the stored procedure is called on the client computer , Only the call statement is transmitted in the network, which greatly reduces network traffic and reduces network load.

(5). Make full use of it as a security mechanism: by restricting the authority to execute a stored procedure, the corresponding data access authority can be restricted, avoiding unauthorized users' access to the data, and ensuring the data Security.

MySQL stored procedure creation


CREATE PROCEDURE procedure name ([[IN|OUT|INOUT] parameter name data type[,[IN|OUT|INOUT] parameter name data type...]]) [characteristics...] procedure body



MySQL uses ";" as the delimiter by default. If no other delimiter is declared, the compiler treats the stored procedure body as ordinary SQL statements, so compilation reports an error. You must therefore declare the current segment delimiter in advance with "DELIMITER //", which makes the compiler treat the content between the two "//" as the stored procedure's code without executing it; "DELIMITER ;" restores the default separator.


The stored procedure may have input, output, input and output parameters according to needs. If there are multiple parameters, use "," to separate them. The parameters of the MySQL stored procedure are used in the definition of the stored procedure. There are three types of parameters, IN, OUT, INOUT:

IN: the value must be specified when calling the stored procedure; modifications to the parameter inside the procedure are not returned to the caller. IN is the default parameter type.

OUT: The value can be changed within the stored procedure and can be returned

INOUT: specified when calling, and can be changed and returned

Process body

The beginning and end of the process body are marked with BEGIN and END.
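As a concrete illustration of the syntax above (the procedure name, table, and columns here are hypothetical), a minimal MySQL stored procedure with an IN and an OUT parameter might look like this:

```sql
-- Change the delimiter so ";" inside the body does not end the statement,
-- define the procedure, then restore the default delimiter.
DELIMITER //
CREATE PROCEDURE count_accounts(IN min_balance INT, OUT total INT)
BEGIN
    SELECT COUNT(*) INTO total FROM account WHERE balance >= min_balance;
END //
DELIMITER ;

-- Calling it: the OUT value is returned through a user variable.
CALL count_accounts(100, @total);
SELECT @total;
```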

The difference and connection between Spring Boot and Spring Cloud

Spring Boot is a rapid-configuration scaffold for Spring; with it you can quickly develop a single microservice. As the name suggests, it is the "boot" of Spring, used to start Spring, making Spring fast and painless to learn and use. It is suitable not only for replacing the original project structure, but even more so for microservice development.

Based on Spring Boot, Spring Cloud provides a complete set of solutions for the architectural problems in the development of the microservice system-service registration and discovery, service consumption, service protection and fuse, gateway, distributed call tracking, distributed configuration management, etc. .

Spring Cloud is a cloud-application development toolkit based on Spring Boot. Spring Boot focuses on the individual service, quick and easy to integrate; Spring Cloud is a service-governance framework that focuses on the system as a whole. Spring Boot follows the principle of convention over configuration: many integration choices are made for you, and you only configure what you need to change. A large part of Spring Cloud is implemented on top of Spring Boot.

How to implement distributed locks (zookeeper, redis, database)

1. Distributed lock based on database

Distributed lock based on table

CREATE TABLE `methodLock` (

`id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',

`method_name` varchar(64) NOT NULL DEFAULT '' COMMENT 'Locked method name',

`desc` varchar(1024) NOT NULL DEFAULT '' COMMENT 'Remark information',

`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Save data time, automatically generated',

PRIMARY KEY (`id`),

UNIQUE KEY `uidx_method_name` (`method_name`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Method in lock';

When we want to lock a method, execute the following SQL: 
insert into methodLock(method_name,desc) values ('method_name','desc') 
Because we have a unique constraint on method_name, if multiple requests are submitted to the database at the same time, the database guarantees that only one insert can succeed. We can then consider that the thread whose insert succeeded has obtained the method's lock and may execute the method body.

When the method has finished executing and you want to release the lock, you need to execute the following SQL: 
delete from methodLock where method_name ='method_name'

The above simple implementation has the following problems:

This lock strongly depends on the availability of the database. The database is a single point. Once the database goes down, the business system will become unavailable.

This lock has no expiration time. Once the unlock operation fails, the lock record will remain in the database, and other threads can no longer obtain the lock.

This lock can only be non-blocking, because the insert operation of the data will directly report an error once the insert fails. Threads that have not acquired the lock will not enter the queue. If they want to acquire the lock again, they must trigger the acquisition operation again.

This lock is non-reentrant: the same thread cannot acquire the lock again before releasing it, because the record already exists in the table.

This lock is an unfair lock, and all threads waiting for the lock compete for the lock by luck.

Of course, we can also solve the above problems in other ways.

Is the database a single point? Deploy two databases with bidirectional synchronization; once one goes down, switch quickly to the standby.

No expiration time? Run a scheduled task that cleans up timed-out records in the database at regular intervals.

Non-blocking? Use a while loop that keeps retrying until the insert succeeds, then return success.

Non-reentrant? Add fields to the table recording the host and thread that currently hold the lock. On the next acquisition, query first; if the current machine's host and thread information matches what is in the database, grant the lock directly.

Unfair? Add an intermediate table recording all threads waiting for the lock, sorted by creation time; only the earliest is allowed to acquire the lock.

Distributed lock based on exclusive lock

In addition to inserting and deleting records in the table, you can also use the database's own locks to implement distributed locks.

We reuse the table we just created. A distributed lock can be implemented through the database's exclusive lock. With MySQL's InnoDB engine, the following method implements the locking operation:

public boolean lock() {
    try {
        connection.setAutoCommit(false);
        // for update takes an exclusive lock on the matching record
        ResultSet result = connection.createStatement().executeQuery(
            "select * from methodLock where method_name = 'xxx' for update");
        if (result.next()) {
            return true;
        }
    } catch (Exception e) {
        // failed to acquire the lock
    }
    return false;
}

Adding for update to the query makes the database place an exclusive lock on the matching records during the query (note that if the condition column has no index, InnoDB may lock the whole table rather than individual rows). Once a record holds an exclusive lock, other threads can no longer take an exclusive lock on that row.

We can think that the thread that obtains the exclusive lock can obtain the distributed lock. When the lock is obtained, the business logic of the method can be executed. After the method is executed, the following method is used to unlock:

public void unlock(){ connection.commit();}

Release the lock through the connection.commit(); operation.

This method effectively solves the problems mentioned above of being unable to release the lock and of non-blocking behavior:

Blocking lock? The for update statement returns immediately when it succeeds and stays blocked when it cannot acquire the lock, until it succeeds.

Service down after locking, lock never released? The database releases the lock by itself once the connection is lost after the service goes down.

2. Distributed lock based on cache (Redis, memcached)

With Redis, a lock is usually acquired via Jedis's setNX:

public boolean trylock(String key) {
    // setnx returns 1 if the key did not exist, i.e. the lock was acquired
    Long code = jedis.setnx(key, "This is a Lock.");
    if (code != null && code == 1L)
        return true;
    return false;
}

public void unlock(String key) {
    // release the lock by deleting the key
    jedis.del(key);
}

This simple setNX lock has several problems:

The lock has no expiry: if the holder crashes before deleting the key, the lock can never be released and other clients deadlock.

Adding an expiry with setExpire does not fully fix this, because setNX and setExpire are two separate commands and are not atomic; if the client crashes between them, the key again never expires.

A single redis instance is a single point of failure. Even with master-slave replication, replication is asynchronous: client A can acquire the lock with setNX on the master, the master can crash before the key is replicated, and client B can then acquire the same lock with setNX on the promoted slave.

To address this, redis's author Salvatore Sanfilippo proposed Redlock, a distributed lock manager (DLM). Redlock runs against N completely independent redis masters (typically N=5). To acquire the lock, the client:

1. Gets the current time.

2. Tries to acquire the lock on all N instances sequentially, using the same key and value on each. Every request uses a timeout much smaller than the lock's validity time (for example 5-50ms per instance for a 10s lock), so that a downed redis instance cannot stall the client.

3. Considers the lock acquired only if it obtained the key on a majority of instances (at least 3 of 5) and the total elapsed time is less than the lock validity time; otherwise it releases the key on every instance and retries.

Because a majority is required, Redlock keeps working even if some redis instances fail, removing the single point of the scheme in section 2.

How safe is distributed locking with Redlock?


Even Redlock is not fully safe against process pauses. Suppose client 1 acquires the lock and then stops for a long full GC (long GC and full GC pauses are a well-known problem in systems such as HBase). While client 1 is paused, the lock expires and client 2 acquires it; when client 1 wakes up after the full GC, it still believes it holds the lock, and clients 1 and 2 write concurrently.

The common remedy is a fencing token: the lock service hands out a monotonically increasing token (effectively a version number) with every grant, and the storage layer rejects any write whose token is older than one it has already seen, so client 1's stale write is refused after client 2's.





3. Distributed lock based on Zookeeper

A Zookeeper (ZK) lock is built on ephemeral znodes and sessions: the node a client creates exists only as long as the client's ZK session is alive (kept up by heartbeats), so if the lock holder crashes or loses its connection, Zookeeper removes the node automatically and the lock is released.

On availability: Zookeeper itself runs as a cluster using a Quorum Based Protocol. A cluster of N Zookeeper servers (N is usually an odd number such as 3 or 5) considers a write committed once N/2+1 servers have accepted it. Under the Quorum Based Protocol, the Zookeeper cluster stays available as long as a majority (N/2+1) of the servers is alive; losing up to N/2 servers is tolerated.

A typical locking flow with a lock node /z:

Client A creates the node /z on Zookeeper and holds the lock.

Client B also tries to create /z; the create fails because A's node already exists.

B then places a watch on /z on Zookeeper and waits.

When A finishes and deletes /z, or A's session expires and Zookeeper deletes the ephemeral node, Zookeeper notifies B.

B creates /z on Zookeeper and becomes the new lock holder.

The failure mode mirrors the GC problem above: if A is paused for a long time, its session can time out, Zookeeper deletes /z, and B acquires the lock; when A resumes, both A and B believe they hold it. Likewise, while the Zookeeper cluster is electing a new Leader, lock operations block until the election finishes and the new Leader (which may grant the lock to B) is in place.

Compared with Redis locks, Zookeeper locks are easier to get right, because release is automatic with the session; but Zookeeper's write throughput is lower, so Redis suits high-performance scenarios while Zookeeper suits high-reliability ones.

Dubbo and ES

Spring Boot integration with Dubbo

IK analyzer

Spring

ThreadLocal


Hibernate uses ThreadLocal to manage sessions: getSession first looks in the ThreadLocal; if the current thread already has a session bound, that session is returned, otherwise a new session is opened, stored into the ThreadLocal, and returned. Each thread therefore always works with its own session.

The ThreadLocal does not store the session itself; it is only a key. Each Thread object carries a ThreadLocalMap field named threadLocals. The keys of that map are ThreadLocal instances (such as a threadSession variable) and the values are the per-thread copies, so every thread reads and writes its own entry in its own map.
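The per-thread behavior described above can be seen directly in a few lines. This is an illustrative sketch; "threadSession" is a hypothetical name echoing the Hibernate pattern, not Hibernate's actual code.

```java
// Each thread gets its own copy: the ThreadLocal instance is the key into
// that thread's own ThreadLocalMap, mirroring how Hibernate binds one
// Session per thread.
class SessionHolder {
    static final ThreadLocal<String> threadSession =
            ThreadLocal.withInitial(() -> "session-for-" + Thread.currentThread().getName());

    static String getSession() { return threadSession.get(); }
}
```

Calling `SessionHolder.getSession()` from two different threads yields two different values, while repeated calls from the same thread always return that thread's own value.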


Dubbo cluster fault-tolerance strategies

Failover Cluster

1. On failure, automatically retry on another server (this is the default strategy).

2. Usually used for read operations, since a retry adds latency.

3. retries="2" sets the retry count (not counting the first call).

Failfast Cluster: fail fast; only one call is made and failure is reported immediately. Usually used for non-idempotent write operations, such as inserting records.

Failsafe Cluster: fail safe; when an exception occurs it is simply ignored. Usually used for operations such as writing audit logs.

Failback Cluster: failures are recorded in the background and the failed requests are resent at regular intervals. Usually used for message notification operations.

Forking Cluster

Invoke several servers in parallel and return as soon as one succeeds. Usually used for read operations with demanding real-time requirements, at the cost of more service resources.

forks="2" sets the maximum number of parallel invocations.

Broadcast Cluster

Broadcast the call to all providers one by one; if any one reports an error, the whole call fails (added in 2.1.0). Usually used to notify all providers to update local resources such as caches or logs.

Redis watch

WATCH monitors one or more keys, and MULTI queues commands that EXEC then executes atomically. Combining watch + multi gives redis an optimistic lock: if a watched key is modified by another client before EXEC runs, the whole transaction is aborted (EXEC returns nil) and the client can retry.