Basic knowledge

1. Why use concurrent programming (the advantages of concurrent programming)

Make full use of the computing power of the multi-core CPU: The computing power of the multi-core CPU can be maximized through concurrent programming, and the performance can be improved

Facilitate business splitting and improve system concurrency and performance: some business scenarios are inherently suited to concurrent programming. Today's systems routinely face millions or even tens of millions of concurrent requests, and multi-threaded concurrent programming is the foundation of high-concurrency system development: using multithreading can greatly improve a system's overall concurrency and performance. For complex business models, a parallel program fits the business better than a serial one, and concurrent programming suits this kind of business splitting.

2. What are the disadvantages of concurrent programming

The purpose of concurrent programming is to improve execution efficiency and speed up the program, but concurrency does not always make a program faster, and it can run into many problems, such as memory leaks, context switching, thread safety, and deadlock.

3. What are the three elements of concurrent programming? How to ensure the safe operation of multiple threads in a Java program?

The three elements of concurrent programming (thread safety is reflected in):

Atomicity: an atom is an indivisible particle. Atomicity means that one or more operations either all succeed or all fail.

Visibility: The modification of a shared variable by one thread can be seen immediately by another thread. (Synchronized, volatile)

Orderliness: The order of program execution is executed in the order of code. (The processor may reorder the instructions)

Reasons for thread safety issues:

Atomicity problems caused by thread switching

Visibility issues caused by caching

Ordering problems caused by compilation optimization

Solution:

The atomic classes under java.util.concurrent.atomic (the Atomic* classes), synchronized, and Lock can solve the atomicity problem; synchronized, volatile, and Lock can solve the visibility problem; the happens-before rules can solve the ordering problem.
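As a minimal sketch of the visibility fix (the class name VisibilityDemo is ours, for illustration only): without volatile on the flag, the worker thread might never see the update and spin forever.

public class VisibilityDemo {
    // volatile guarantees the main thread's write becomes visible to the worker
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until the flag flips
            }
            System.out.println("worker saw stop == true and exited");
        });
        worker.start();
        Thread.sleep(100);
        stop = true;  // without volatile this write might stay invisible to the worker
        worker.join();
    }
}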

4. What is the difference between parallel and concurrency?

Concurrency: multiple tasks take turns executing on the same CPU core in finely divided time slices (alternately); from a logical point of view, the tasks appear to execute at the same time.

Parallelism: multiple processors or multiple cores process multiple tasks in the same unit of time, which is "simultaneous" in the true sense.

Serial: there are n tasks, executed sequentially by one thread. Since all tasks and methods run in a single thread, there is no thread-safety issue and no critical-section problem.

A vivid metaphor:

Concurrency = two queues and one coffee machine.

Parallel = two queues and two coffee machines.

Serial = one queue and one coffee machine.

5. What is multithreading, and the advantages and disadvantages of multithreading?

Multi-threading: Multi-threading means that the program contains multiple execution streams, that is, multiple different threads can run at the same time to perform different tasks in a program.

The benefits of multithreading:

Can improve CPU utilization. In a multithreaded program, when a thread must wait, the CPU can run other threads instead of waiting, which greatly improves the efficiency of the program. That is to say, a single program is allowed to create multiple threads that execute in parallel to complete their tasks.

Disadvantages of multithreading:

Threads are also programs, so threads need to occupy memory. The more threads, the more memory;

Multithreading needs to be coordinated and managed, so CPU time is required to track threads;

Access to shared resources between threads will affect each other, and the problem of competing for shared resources must be resolved.

The difference between threads and processes

1. What are threads and processes?

Process

An application that runs in memory. Each process has its own independent piece of memory space, and a process can have multiple threads. For example, in a Windows system, a running xx.exe is a process.

Thread

An execution task (control unit) in the process is responsible for the execution of the program in the current process. A process has at least one thread, a process can run multiple threads, and multiple threads can share data.

2. The difference between process and thread

A thread has many characteristics of a traditional process, so it is also called a light-weight process or process element; a traditional process is called a heavy-weight process, which is equivalent to a task with only one thread. In an operating system that supports threads, a process usually has several threads, and at least one.

Fundamental difference: Process is the basic unit of operating system resource allocation, while thread is the basic unit of processor task scheduling and execution

Resource overhead: Each process has independent code and data space (program context), and switching between programs will have a large overhead; threads can be regarded as lightweight processes, and the same type of threads share code and data space. Each thread has its own independent running stack and program counter (PC), and the overhead of switching between threads is small.

Containment relationship: if a process has multiple threads, execution proceeds not along one line but along multiple lines (threads) working together; a thread is part of a process, which is why threads are also called lightweight processes.

Memory allocation: threads of the same process share the address space and resources of this process, while the address space and resources between processes are independent of each other

Impact relationship: After a process crashes, it will not affect other processes in protected mode, but a thread crashes and the entire process dies. So multi-process is more robust than multi-thread.

Execution process: Each independent process has an entry for program operation, a sequential execution sequence and a program exit. However, threads cannot be executed independently. They must exist in the application program. The application program provides multiple thread execution control, and both of them can be executed concurrently.

3. What is context switching?

In multi-threaded programming, the number of threads is generally greater than the number of CPU cores, and at any moment a CPU core can be used by only one thread. To let all threads run effectively, the CPU adopts the strategy of allocating time slices to threads and rotating among them. When a thread's time slice is used up, the thread returns to the ready state so other threads can use the CPU. This process is a context switch.

In a nutshell: the current task will save its state before switching to another task after the CPU time slice is executed, so that the state of this task can be loaded again when switching back to this task next time. The process from saving to reloading a task is a context switch.

Context switching is usually computationally expensive; every switch costs a non-trivial amount of processor time, and with dozens or hundreds of switches per second, context switching can consume a lot of the system's CPU time. In fact, it may be the most costly single operation in the operating system.

Compared with other operating systems (including other Unix-like systems), Linux has many advantages, one of which is that it consumes very little time for context switching and mode switching.

4. What is the difference between a daemon thread and a user thread?

Daemon thread and user thread

User thread: runs in the foreground and performs specific tasks. The main thread of a program and child threads handling network connections are user threads.

Daemon thread: runs in the background and serves the foreground threads; a daemon thread can be thought of as the "servant" of the non-daemon threads in the JVM. Once all user threads have finished running, the daemon threads end their work together with the JVM. The thread running the main function is a user thread; when it starts, the JVM also starts many daemon threads internally, such as the garbage collection threads.

One of the more obvious differences: once all user threads end, the JVM exits regardless of whether any daemon thread is still running; a daemon thread does not prevent the JVM from exiting.

Precautions:

  • setDaemon(true) must be called before the start() method, otherwise an IllegalThreadStateException is thrown (see the sketch after this list)
  • New threads spawned inside a daemon thread are also daemon threads
  • Not every task can be handed to a daemon thread, for example read/write operations or computation logic whose results matter
  • Daemon threads cannot rely on a finally block to guarantee that shutdown or cleanup logic runs. As noted above, once all user threads finish, the daemon threads end together with the JVM, so a finally block in a daemon thread may never execute.
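A minimal sketch of the rules above (the class name DaemonDemo is ours): the daemon flag is set before start(), and the finally block may never run because the JVM exits with the last user thread.

public class DaemonDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread daemon = new Thread(() -> {
            try {
                while (true) {
                    System.out.println("daemon working...");
                    Thread.sleep(500);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                // may never run: the JVM exits as soon as all user threads finish
                System.out.println("daemon cleanup");
            }
        });
        daemon.setDaemon(true); // must come BEFORE start(), or IllegalThreadStateException
        daemon.start();
        Thread.sleep(1200);     // when main (a user thread) ends, the JVM exits and kills the daemon
    }
}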

5. How to find which thread has the highest CPU utilization on Windows and Linux?

Use task manager on windows to view, and use top tool to view on linux.

  • Find the pid of the CPU-consuming process: run the top command in a terminal, then press Shift+P to sort by CPU usage and locate the pid
  • With the pid from the first step, run top -H -p pid (for example top -H -p 1328), then press Shift+P to find the thread id with the highest CPU usage
  • Convert that thread id to hexadecimal (for example with printf "%x\n" tid)
  • Use the jstack tool to dump the process's thread information: jstack pid > /tmp/t.dat, for example jstack 31365 > /tmp/t.dat
  • Open /tmp/t.dat and search for the hexadecimal thread id (the nid field) to find the corresponding thread's stack

6. What is thread deadlock

Baidu Encyclopedia: deadlock refers to a blocking phenomenon in which two or more processes (threads) compete for resources or wait on each other during execution; without external intervention, none of them can make progress. The system is then said to be in a deadlock state, and these processes (threads) that wait for each other forever are called deadlocked processes (threads).

Multiple threads are blocked at the same time, one or all of them are waiting for a certain resource to be released. Because the thread is blocked indefinitely, it is impossible for the program to terminate normally.

For example, suppose thread A holds resource 2 and thread B holds resource 1, and each applies for the other's resource at the same time; the two threads then wait for each other and enter a deadlock state.

The following example simulates this deadlock situation (the code comes from "The Beauty of Concurrent Programming"):

public class DeadLockDemo {
    private static Object resource1 = new Object(); // Resource 1
    private static Object resource2 = new Object(); // Resource 2

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (resource1) {
                System.out.println(Thread.currentThread() + "get resource1");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource2");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread() + "get resource2");
                }
            }
        }, "Thread 1").start();

        new Thread(() -> {
            synchronized (resource2) {
                System.out.println(Thread.currentThread() + "get resource2");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource1");
                synchronized (resource1) {
                    System.out.println(Thread.currentThread() + "get resource1");
                }
            }
        }, "Thread 2").start();
    }
}

Output result

Thread[Thread 1,5,main]get resource1
Thread[Thread 2,5,main]get resource2
Thread[Thread 1,5,main]waiting get resource2
Thread[Thread 2,5,main]waiting get resource1

7. What are the four necessary conditions for forming a deadlock

  • Mutually exclusive condition: thread (process) is exclusive to the allocated resources, that is, a resource can only be occupied by one thread (process) until it is released
  • Request and hold conditions: When a thread (process) is blocked due to a request for occupied resources, it keeps on holding the acquired resources.
  • Non-deprivation conditions: the resources acquired by the thread (process) cannot be forcibly deprived by other threads before they are used up, and the resources are released only after they are used up.
  • Loop waiting condition: When a deadlock occurs, the waiting thread (process) must form a loop (similar to an infinite loop), causing permanent blockage

8. How to avoid thread deadlock

We only need to destroy one of the four conditions that caused the deadlock.

  • Destroy the mutual exclusion condition: there is no way to destroy this one, because locks are meant to be mutually exclusive (critical resources require mutually exclusive access).

  • Destroy the request-and-hold condition: apply for all resources at once.

  • Destroy the non-deprivation condition: when a thread that already occupies some resources fails to acquire further resources, it actively releases the resources it holds.

  • Destroy the circular-wait condition: prevent deadlock by applying for resources in a fixed order and releasing them in reverse order (see the sketch after this list).
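A minimal sketch of the ordered-acquisition fix, applied to the DeadLockDemo above: if Thread 2 also requests resource1 before resource2, the circular wait can no longer form.

// Thread 2 of DeadLockDemo, modified to acquire locks in the same order as Thread 1
new Thread(() -> {
    synchronized (resource1) {
        System.out.println(Thread.currentThread() + "get resource1");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread() + "waiting get resource2");
        synchronized (resource2) {
            System.out.println(Thread.currentThread() + "get resource2");
        }
    }
}, "Thread 2").start();

Now both threads contend for resource1 first; whichever wins later acquires resource2, releases both, and the program terminates normally.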

9. Four ways to create threads

  • Inherit the Thread class;
  • Implement the Runnable interface;
  • Implement the Callable interface;
  • Use the Executors tool class to create a thread pool

Inherit the Thread class

step

  • Define a subclass of the Thread class and override the run() method, which contains the business logic the thread should execute
  • Create an instance of the custom thread subclass
  • Call the start() method of the subclass instance to start the thread
public class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " The run() method is executing...");
    }
}

public class ThreadTest {
    public static void main(String[] args) {
        MyThread myThread = new MyThread();
        myThread.start();
        System.out.println(Thread.currentThread().getName() + " The main() method execution ends");
    }
}

Implement the Runnable interface

step

  • Define a class MyRunnable that implements the Runnable interface, and override the run() method
  • Create a MyRunnable instance myRunnable, and create a Thread object with myRunnable as the target. The Thread object is the real thread object
  • Call the start() method of the thread object
public class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " run() method is executing...");
    }
}

public class RunnableTest {
    public static void main(String[] args) {
        MyRunnable myRunnable = new MyRunnable();
        Thread thread = new Thread(myRunnable);
        thread.start();
        System.out.println(Thread.currentThread().getName() + " main() method execution completed");
    }
}

Implement the Callable interface

step

  • Create a class myCallable that implements the Callable interface
  • Create a FutureTask object with myCallable as a parameter
  • Create a Thread object with FutureTask as a parameter
  • Call the start() method of the thread object
public class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() {
        System.out.println(Thread.currentThread().getName() + " call() method is executing...");
        return 1;
    }
}

public class CallableTest {
    public static void main(String[] args) {
        FutureTask<Integer> futureTask = new FutureTask<Integer>(new MyCallable());
        Thread thread = new Thread(futureTask);
        thread.start();
        try {
            Thread.sleep(1000);
            System.out.println("Return result " + futureTask.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " main() method execution completed");
    }
}

Use the Executors tool class to create a thread pool

Executors provides a series of factory methods to create a thread pool, and the returned thread pool implements the ExecutorService interface.

There are mainly newFixedThreadPool, newCachedThreadPool, newSingleThreadExecutor, newScheduledThreadPool. These four thread pools will be introduced in detail later

public class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " run() method is executing...");
    }
}

public class SingleThreadExecutorTest {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        MyRunnable runnableTest = new MyRunnable();
        for (int i = 0; i < 5; i++) {
            executorService.execute(runnableTest);
        }
        System.out.println("Thread task starts to execute");
        executorService.shutdown();
    }
}

11. Tell me about the difference between runnable and callable?

Same point

  • Both are interfaces
  • Both can be used to write multithreaded programs
  • Both rely on Thread.start() to start the thread

Main difference

  • The run method of the Runnable interface has no return value; the call method of the Callable interface has a return value, which is a generic type and can be used with Future and FutureTask to obtain the result of asynchronous execution
  • The run method of the Runnable interface cannot throw checked exceptions, and its exceptions cannot be caught and handled by the caller; the call method of the Callable interface allows exceptions to be thrown, and the exception information can be obtained

Note: the Callable interface supports returning an execution result, which is retrieved by calling FutureTask.get(). This call blocks the calling thread until the result is ready; if you don't call it, there is no blocking.

12. What is the difference between thread run() and start()?

  • Each thread completes its operation through the method run() corresponding to a specific Thread object. The run() method is called the thread body. Start a thread by calling the start() method of the Thread class.

  • The start() method is used to start the thread, and the run() method is used to execute the runtime code of the thread. run() can be called repeatedly, while start() can only be called once.

  • The start() method starts a thread and truly achieves multithreaded operation. The caller of start() does not wait for the run() method body to complete and can directly continue executing other code; the new thread is then in the ready state, not yet running. The Thread class invokes run() when the thread is scheduled; when run() returns, the thread terminates and the CPU schedules other threads.

  • The run() method runs in the current thread; it is just an ordinary method, not multithreading. Calling run() directly is equivalent to calling any ordinary function: the caller must wait for run() to finish before executing the following code, so there is still only one execution path and no new thread at all. Therefore, use the start() method rather than the run() method for multithreaded execution (see the sketch after this list).
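A minimal sketch of the difference (the class name RunVsStart is ours): run() executes in the calling thread, while start() spawns a new one.

public class RunVsStart {
    public static void main(String[] args) {
        Runnable whoAmI = () ->
                System.out.println("executed by: " + Thread.currentThread().getName());
        new Thread(whoAmI).run();   // prints "executed by: main" -- just an ordinary method call
        new Thread(whoAmI).start(); // prints "executed by: Thread-1" (name may vary) -- a real new thread
    }
}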

13. What are Callable and Future?

The Callable interface is similar to Runnable, as the name suggests, but Runnable does not return a result and cannot throw checked exceptions. Callable is more powerful: after being executed by a thread it can return a value, and that value can be obtained through a Future. In other words, Future can get the return value of an asynchronously executed task.

The Future interface represents an asynchronous task, which is the result of an asynchronous task that may not be completed yet. So Callable is used to produce results, and Future is used to obtain results.
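A minimal sketch (the class name FutureDemo is ours): the Callable produces the result, and the Future returned by the thread pool retrieves it.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // submit(Callable) returns a Future for the pending result
        Future<Integer> future = pool.submit(() -> {
            Thread.sleep(500); // simulate work
            return 42;
        });
        System.out.println("result = " + future.get()); // blocks until the value is ready
        pool.shutdown();
    }
}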

14. What is FutureTask

FutureTask represents a cancellable asynchronous computation task. It wraps a Callable (or a Runnable plus a result) and offers methods to start and cancel the task, to query whether it has completed, and to retrieve the result with get(), which blocks until the task finishes. Because FutureTask itself implements Runnable, a FutureTask can be handed to a Thread or submitted to a thread pool for execution.
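A minimal sketch (the class name FutureTaskDemo is ours) showing both roles at once: the FutureTask runs as a Runnable and is read as a Future.

import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        // wrap a Callable; FutureTask implements both Runnable and Future
        FutureTask<String> task = new FutureTask<>(() -> "done");
        new Thread(task).start();       // usable wherever a Runnable is expected
        System.out.println(task.get()); // blocks until the Callable completes
    }
}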

15. What are the states in the life cycle of a thread?

  • New (new): the thread object has been created, but start() has not yet been called.

  • Ready (runnable): after start() is called, the thread enters the runnable pool and waits to be scheduled onto a CPU.

  • Running (running): the thread has obtained a CPU time slice and is executing its run() method; when the time slice is used up it returns to the ready (runnable) state.

  • Blocked (blocked): the thread gives up the CPU and temporarily stops running, for one of three reasons:

(1) Waiting blocking: the running thread calls wait(), and the JVM puts it into the waiting queue.

(2) Synchronization blocking: the running thread fails to acquire a synchronized lock (because the lock is held by another thread), and the JVM puts it into the lock pool.

(3) Other blocking: the running thread calls sleep() or join(), or issues an I/O request; when sleep() times out, join() returns, or the I/O completes, the thread becomes ready again.

  • Death (dead): The thread's run(), main() method execution ends, or the run() method exits due to an exception, the thread ends its life cycle. A thread that has died cannot be reborn again.

16. What is the thread scheduling algorithm used in Java?

A computer usually has only one CPU and can execute only one machine instruction at any moment; each thread can execute instructions only when it holds the right to use the CPU. So-called multi-threaded concurrent execution actually means that, viewed macroscopically, the threads obtain the CPU in turn and each executes its own task. In the runnable pool there are multiple threads in the ready state waiting for the CPU; one of the Java virtual machine's tasks is thread scheduling, that is, allocating the right to use the CPU to multiple threads according to a specific mechanism.

There are two scheduling models: time-sharing scheduling model and preemptive scheduling model.

  • The time-sharing scheduling model means that all threads take turns to obtain the right to use the cpu, and the time slice of the CPU occupied by each thread is evenly distributed. This is also easier to understand.

  • The Java virtual machine adopts a preemptive scheduling model, which means giving priority to threads with high priority in the runnable pool to occupy the CPU. If the threads in the runnable pool have the same priority, then a thread is randomly selected to make it occupy the CPU. The thread in the running state will continue to run until it has to give up the CPU.

17. Thread scheduling strategy

The thread scheduler selects the thread with the highest priority to run, but if the following occurs, it will terminate the thread's running:

(1) The yield method is called in the thread body to give up the right to occupy the cpu

(2) The sleep method is called in the thread body to make the thread go to sleep

(3) Thread is blocked due to IO operation

(4) Another higher priority thread appears

(5) In a system that supports time slices, the thread's time slice runs out

18. What is Thread Scheduler and Time Slicing?

The thread scheduler is an operating system service that is responsible for allocating CPU time for threads in the Runnable state. Once we create a thread and start it, its execution depends on the implementation of the thread scheduler.

Time slicing refers to the process of allocating available CPU time to available Runnable threads. Allocating CPU time can be based on thread priority or thread waiting time.

Thread scheduling is not controlled by the Java virtual machine, so it is a better choice to control it by the application (that is, don't let your program depend on the priority of the thread).

19. Please tell me the methods related to thread synchronization and thread scheduling.

(1) wait(): Make a thread in a waiting (blocking) state, and release the lock of the object it holds;

(2) sleep(): Make a running thread sleep, which is a static method. Calling this method must handle InterruptedException;

(3) notify(): wake up a thread in a waiting state. Of course, when calling this method, it cannot exactly wake up a certain thread in the waiting state, but the JVM determines which thread to wake up, and has nothing to do with priority;

(4) notifyAll(): wakes up all threads in the waiting state. This method does not hand the object's lock to all of them; instead it lets them compete, and only the thread that obtains the lock enters the ready state;

20. What is the difference between sleep() and wait()?

Both can suspend the execution of the thread

  • The difference of the class: sleep() is a static method of the Thread class, and wait() is a method of the Object class.
  • Whether to release the lock: sleep() does not release the lock; wait() releases the lock.
  • Different purposes: Wait is usually used for interaction/communication between threads, and sleep is usually used to suspend execution.
  • The usage is different: after the wait() method is called, the thread will not automatically wake up, and other threads need to call the notify() or notifyAll() method on the same object. After the sleep() method is executed, the thread will automatically wake up. Or you can use wait(long timeout) to automatically wake up the thread after timeout.

21. How do you call the wait() method? Use if blocks or loops? why?

A thread in the waiting state may be woken up spuriously (a false wakeup). If the wait condition is not re-checked in a loop, the program may proceed even though the condition it was waiting for has not been met.

The wait() method should be called in a loop, because when the thread gets the CPU to start execution, other conditions may not be met, so it is better to loop to check whether the conditions are met before processing. The following is a standard code that uses wait and notify methods:

synchronized (monitor) {
    // re-check the condition predicate in a loop to guard against spurious wakeups
    while (!locked) {
        monitor.wait(); // wait to be woken up
    }
    // handle other business logic
}

22. Why are the methods wait(), notify() and notifyAll() of thread communication defined in the Object class?

1. The wait-notify mechanism is a communication mechanism between threads built on the premise of acquiring an object's lock: in Java, one first acquires the object's lock and then calls the object's wait/notify methods. Since any object can serve as a lock, and the lock object is arbitrary, these communication methods must be equally universal, which is why they are defined in the Object class.

2. A thread can hold multiple object locks, and wait, notify, and notifyAll are bound to a specific object lock. For example, if you call wait() on the lock object Object, the thread can only be woken up by Object.notify() or Object.notifyAll(); this lets the JVM easily know which object lock's waiting pool the thread should be woken from. If the calls were Thread.wait(), Thread.notify(), and Thread.notifyAll(), the virtual machine could not know which object lock is meant. Therefore, since any object can be the carrier of message communication, wait-notify is defined in Object, the common parent class of all classes.

23. Why wait(), notify() and notifyAll() must be called in a synchronized method or synchronized block?

(1) Why wait() must be called in a synchronized method/code block?

Answer: Calling wait() is to release the lock. The prerequisite for releasing the lock is that the lock must be acquired first, and then the lock can be released.

(2) Why must notify() and notifyAll() be called in a synchronized method/code block?

Answer: notify() and notifyAll() hand the lock over to threads waiting in wait(), letting them continue to execute. If the caller did not hold the lock, how could it hand the lock to another thread? (Essentially, they let the threads in the entry queue compete for the lock.)

explain in detail:

I always remembered the teacher saying that wait(), notify(), and notifyAll() must be used inside synchronized, and was puzzled by it; having studied it now, I finally understand.

First of all, we must understand that each object can be considered as a "monitor", this monitor consists of three parts (an exclusive lock, an entry queue, a waiting queue). Note that an object can only have one exclusive lock, but any thread can own this exclusive lock.

For an object's non-synchronized methods, any thread can call them at any time (that is, ordinary methods can be called by multiple threads simultaneously).

For an object's synchronized methods, only the thread holding the object's exclusive lock can call them. If the exclusive lock is occupied by another thread, a thread calling a synchronized method blocks and enters the entry queue.

If a thread that owns the exclusive lock calls wait() inside a synchronized method of the object, it releases the exclusive lock and joins the object's waiting queue. (Why call wait()? Typically the thread is waiting for some variable to be set before it can proceed, and notify() signals that the variable has been set.)

Calling notify() or notifyAll() transfers threads from the waiting queue to the entry queue and lets them compete for the lock; therefore the calling thread must own the lock.

What is the function of the yield method in the Thread class? It moves the current thread from the running state to the ready (runnable) state.

The current thread has reached the ready state, then which thread will change from the ready state to the execution state next? It may be the current thread, or it may be another thread, depending on the allocation of the system.

24. Why are the sleep() and yield() methods of the Thread class static?

The sleep() and yield() methods of the Thread class will run on the currently executing thread. So it doesn't make sense to call these methods on other threads that are waiting. This is why these methods are static. They can work in the currently executing thread and avoid the programmer's mistaken belief that these methods can be called in other non-running threads.

Because sleep and yield are static methods, whichever thread calls them puts itself to sleep or yields, no matter where the call appears.

If sleep and yield were instance methods, things would get chaotic: a thread could obtain references to other thread objects and call their sleep and yield methods, forcing other threads to give up the CPU, which would make program control very difficult.

25. What is the difference between thread sleep() method and yield() method?

(1) The sleep() method does not consider the priority of the thread when giving other threads the opportunity to run, so it will give the low-priority thread a chance to run; the yield() method will only give the same priority or higher priority thread Take the opportunity to run;

(2) The thread turns into a blocked state after executing the sleep() method, and turns into a ready state after executing the yield() method;

(3) The sleep() method declaration throws InterruptedException, and the yield() method does not declare any exception;

26. How to stop a running thread?

There are three ways to terminate a running thread in java:

  • Use the exit flag to make the thread exit normally, that is, the thread terminates when the run method is completed.
  • Use the stop method to force termination, but this method is not recommended, because stop, like suspend and resume, is an expired method.
  • Use the interrupt method to interrupt the thread.

27. What is the difference between interrupted and isInterrupted methods in Java?

interrupt() method: used to interrupt a thread. Calling it sets the target thread's status to "interrupted".

Note: interrupting a thread only sets its interrupt status bit; it does not stop the thread. The user must monitor the thread's status and handle it. Methods that support interruption (i.e., methods that throw InterruptedException when the thread is interrupted) are the ones that monitor the interrupt status: once the status is set to "interrupted", they throw an InterruptedException.

interrupted: It is a static method to check whether the current interrupt signal is true or false and clear the interrupt signal. If a thread is interrupted, the first call to interrupted will return true, and the second and subsequent calls will return false.

isInterrupted: Check whether the current interrupt signal is true or false
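A minimal sketch of the flag semantics (the class name InterruptFlagDemo is ours): isInterrupted() leaves the flag intact, while the static Thread.interrupted() clears it.

public class InterruptFlagDemo {
    public static void main(String[] args) {
        Thread.currentThread().interrupt();                          // set the interrupt flag on main
        System.out.println(Thread.currentThread().isInterrupted()); // true: flag is kept
        System.out.println(Thread.currentThread().isInterrupted()); // still true
        System.out.println(Thread.interrupted());                   // true, and CLEARS the flag
        System.out.println(Thread.interrupted());                   // false: flag was cleared
    }
}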

28. How do you wake up a blocked thread in Java?

First of all, wait() and notify() are methods of objects. Calling an object's wait() method blocks the current thread and releases the object's lock; correspondingly, calling the object's notify() method randomly wakes one thread blocked on that object, which must then re-acquire the object's lock before it can continue.

Secondly, the wait and notify methods must be called inside a synchronized block or method, and the lock object of that synchronized block or method must be the same object on which wait and notify are called. This way the current thread has successfully acquired the object's lock before calling wait, and after wait blocks, the thread releases the previously acquired object lock.

29. What is the difference between notify() and notifyAll()?

If the thread calls the wait() method of the object, then the thread will be in the waiting pool of the object, and the threads in the waiting pool will not compete for the lock of the object.

notifyAll() will wake up all threads, notify() will only wake up one thread.

After notifyAll() is called, all threads will be moved from the waiting pool to the lock pool, and then participate in the lock competition. If the competition succeeds, it will continue to execute. If it is unsuccessful, it will stay in the lock pool and wait for the lock to be released before participating in the competition again. And notify() will only wake up one thread, and which thread to wake up is controlled by the virtual machine.

30. How to share data between two threads?

Sharing variables can be achieved between two threads.

Generally speaking, shared variables require that the variable itself be thread-safe, and then when used in a thread, if there is a compound operation on the shared variable, then the thread safety of the compound operation must also be ensured.

How does Java realize communication and cooperation between multiple threads?

Communication and collaboration between threads can be realized by means of interrupts and shared variables.

For example, the most classic producer-consumer model: when the queue is full, the producer needs to wait for the queue to have room to continue to put goods into it, and during the waiting period, the producer must release the critical resource (ie the queue) Right of occupation. Because if the producer does not release the right to occupy critical resources, then the consumer will not be able to consume the goods in the queue, and the queue will not have room, so the producer will wait indefinitely. Therefore, under normal circumstances, when the queue is full, the producer will surrender the right to occupy the critical resource and enter the suspended state. Then wait for the consumer to consume the goods, and then the consumer informs the producer that there is room in the queue. Similarly, when the queue is empty, the consumer must also wait, waiting for the producer to notify it that there is a product in the queue. This process of mutual communication is the cooperation between threads.

The two most common ways of thread communication cooperation in Java:

1. wait()/notify()/notifyAll(), called on the Object whose lock is held via synchronized

2. await()/signal()/signalAll(), called on Condition objects created from a ReentrantLock (see the sketch below)
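A minimal bounded-buffer sketch of the producer-consumer cooperation described above, using ReentrantLock and two Conditions (the class name BoundedBuffer and the capacity value are ours):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Queue<T> queue = new LinkedList<>();
    private final int capacity = 10;
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) {
                notFull.await();   // producer yields the critical resource and waits for room
            }
            queue.add(item);
            notEmpty.signal();     // tell a waiting consumer there is now an item
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();  // consumer waits for the producer to add an item
            }
            T item = queue.remove();
            notFull.signal();      // tell a waiting producer there is now room
            return item;
        } finally {
            lock.unlock();
        }
    }
}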

31. Which is a better choice, synchronization method or synchronization block?

A synchronized block is the better choice, because it does not lock the entire object (though you can make it do so). A synchronized method locks the entire object: even if the class contains multiple unrelated synchronized blocks, threads calling them all stop and wait to obtain the lock on the same object.

Synchronous blocks must conform to the principle of open calls, and only lock the corresponding objects in the code blocks that need to be locked, so that deadlocks can also be avoided from the side.

Please know a principle: the smaller the scope of synchronization, the better.

32. What is thread synchronization and thread mutual exclusion, and what are the ways to achieve it?

When a thread operates on shared data, the operation should be made "atomic": other threads must not interrupt it before the related operations are complete, otherwise the integrity of the data is destroyed and wrong results inevitably follow. Ensuring this is thread synchronization.

In multi-threaded applications, consider data synchronization between different threads and prevent deadlocks. When two or more threads are waiting for each other to release resources at the same time, a deadlock between threads will be formed. In order to prevent the occurrence of deadlock, it is necessary to achieve thread safety through synchronization.

Thread mutual exclusion refers to the exclusiveness of shared process system resources when each single thread accesses it. When there are several threads to use a shared resource, only one thread is allowed to use it at any time, and other threads that want to use the resource must wait until the resource occupant releases the resource. Thread mutual exclusion can be regarded as a special kind of thread synchronization.

The synchronization methods between threads fall roughly into two categories: user mode and kernel mode. As the names imply, kernel mode means using system kernel objects for synchronization, which requires switching between kernel mode and user mode; user-mode methods complete the operation entirely in user mode without switching into the kernel.

The methods in user mode are: atomic operation (for example, a single global variable), critical section. The methods in kernel mode are: events, semaphores, and mutex.

Methods to achieve thread synchronization

Synchronized methods: methods modified by the synchronized keyword

Synchronized code blocks: code blocks modified by the synchronized keyword

Use the special field modifier volatile to achieve thread synchronization: the volatile keyword provides a lock-free mechanism for field access

Use reentrant locks to achieve thread synchronization: the ReentrantLock class is a reentrant, mutually exclusive lock that implements the Lock interface. It has the same basic behavior and semantics as synchronized methods.

1. How to ensure the safe operation of multiple threads in a Java program?

Method 1: Use security classes, such as the classes under java.util.concurrent, use the atomic class AtomicInteger

Method 2: Use automatic lock synchronized.

Method 3: Use manual lock Lock.

The Java sample code for manual lock is as follows:

Lock lock = new ReentrantLock();
lock.lock();
try {
    System.out.println("Get the lock");
} catch (Exception e) {
    // TODO: handle exception
} finally {
    System.out.println("Release the lock");
    lock.unlock();
}

2. What is your understanding of thread priority?

Each thread has a priority. Generally speaking, high-priority threads will have priority at runtime, but this depends on the implementation of thread scheduling, which is OS dependent. We can define the priority of threads, but this does not guarantee that high-priority threads will execute before low-priority threads. The thread priority is an int variable (from 1-10), 1 represents the lowest priority, and 10 represents the highest priority.

Java's thread priority scheduling is entrusted to the operating system to handle it, so it is related to the specific operating system priority. Generally, there is no need to set the thread priority if it is not specifically needed.

3. Which thread is calling the construction method and static block of the thread class

This is a tricky and cunning question. Remember: the constructor and static blocks of a Thread class are executed by the thread in which the Thread object is created (new'd), while the code in the run method is executed by the new thread itself.

If the above statement confuses you, let me give you an example. Assuming that Thread1 is new in Thread2, and Thread2 is new in main function, then:

(1) The construction method and static block of Thread2 are called by the main thread, and the run() method of Thread2 is called by Thread2 itself.

(2) The construction method and static block of Thread1 are called by Thread2, and the run() method of Thread1 is called by Thread1 itself.

4. How to get a thread dump file in Java? How do you get the thread stack in Java?

The Dump file is a memory image of the process. The execution state of the program can be saved to the dump file through the debugger.

Under Linux, you can use the command kill -3 PID (the process ID of the Java process) to get the dump file of the Java application.

Under Windows, you can press Ctrl + Break to get it. In this way, the JVM will print the thread dump file to the standard output or error file. It may be printed in the console or log file, depending on the configuration of the application.

Concurrency Theory

1.Java memory model

www.jianshu.com/p/d52fea0d6...

2. What is the purpose of garbage collection in Java? When does garbage collection take place?

Garbage collection is performed when there are unreferenced objects or objects beyond the scope in memory.

The purpose of garbage collection is to identify and discard objects that are no longer used by the application to release and reuse resources.

If an object's reference is set to null, will the garbage collector immediately release the memory occupied by the object? No; the object merely becomes collectible in the next garbage collection cycle.

In other words, it will not be immediately reclaimed by the garbage collector, but the memory occupied by it will be released during the next garbage collection.

3. When is the finalize() method called? What is the purpose of finalization?

1) When the garbage collector decides to reclaim an object, it runs the object's finalize() method. finalize is a method of the Object class, declared as protected void finalize() throws Throwable { }. The finalize() method of a reclaimed object is called when the garbage collector runs; the method can be overridden to release the object's resources. Note: once the garbage collector is ready to free the object's memory, it first calls finalize(), and the memory the object occupies is actually reclaimed at the next garbage collection.

2) GC already reclaims memory, so what else does an application need to do in finalize()? The answer: most of the time, nothing (that is, there is no need to override it). Only in some very special cases, for example when you call native methods (usually written in C), can you call the C release function in finalize().

4. Why does the code reorder?

To improve performance, processors and compilers often reorder instructions when executing a program, but they cannot reorder at will; the reordering must satisfy the following two conditions:

  • Can not change the results of the program running in a single-threaded environment;
  • Reordering is not allowed if there is a data dependency

It should be noted that reordering will not affect the execution result of a single-threaded environment, but it will destroy the execution semantics of multi-threaded.

5. The difference between as-if-serial rule and happens-before rule

The as-if-serial semantics guarantees that the execution result of a single-threaded program will not be changed, and the happens-before relationship guarantees that the execution result of a correctly synchronized multi-threaded program will not be changed.

The as-if-serial semantics creates an illusion for programmers who write single-threaded programs: single-threaded programs are executed in the order of the program. The happens-before relationship creates an illusion for programmers who write correctly synchronized multithreaded programs: correctly synchronized multithreaded programs are executed in the order specified by happens-before.

The purpose of as-if-serial semantics and happens-before is to increase the parallelism of program execution as much as possible without changing the result of program execution.

Concurrent keywords

synchronized

1. What is the role of synchronized?

In Java, the synchronized keyword is used to control thread synchronization, that is, in a multithreaded environment, to prevent a synchronized code segment from being executed by multiple threads at the same time. synchronized can modify instance methods, static methods, and code blocks.

In addition, in early versions of Java, synchronized was a heavyweight, inefficient lock, because synchronized relies on a monitor lock, which is in turn implemented via the underlying operating system's Mutex Lock. Java threads are mapped onto the operating system's native threads, so suspending or waking a thread requires help from the OS, which must switch from user mode to kernel mode; this transition takes a relatively long time and has a relatively high cost, which is why early synchronized was inefficient. Fortunately, from Java 6 onward, synchronized was officially optimized at the JVM level, so its lock efficiency is now quite good. JDK 1.6 introduced many optimizations to the lock implementation, such as spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locks, and lightweight locks, to reduce the overhead of lock operations.

2. The three main ways to use the synchronized keyword:

  • Modification instance method: Act on the current object instance to lock, obtain the current object instance lock before entering the synchronization code
  • Modifying a static method: this locks the current class and affects all object instances of the class, because static members do not belong to any instance; they are class members (static marks this as a static resource of the class, of which there is only one copy no matter how many objects are created). So if thread A calls a non-static synchronized method of an instance while thread B calls a static synchronized method of the class that instance belongs to, this is allowed and no mutual exclusion occurs: the static synchronized method holds the lock of the current class, while the non-static synchronized method holds the lock of the current instance object.
  • Modified code block: Specify the lock object, lock the given object, and obtain the lock of the given object before entering the synchronization code base.

In summary: synchronized on a static method or a synchronized(SomeClass.class) block locks the Class object; synchronized on an instance method locks the object instance. Try not to use synchronized(String a), because string constants are pooled (cached) in the JVM, so unrelated code synchronizing on equal string literals would share the same lock.

3. Double-checked locking: a thread-safe singleton implemented with synchronized

public class Singleton {

    private volatile static Singleton uniqueInstance;

    private Singleton() {
    }

    public static Singleton getUniqueInstance() {
        // First judge whether the object has been instantiated before entering the locked code
        if (uniqueInstance == null) {
            // Class object lock
            synchronized (Singleton.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new Singleton();
                }
            }
        }
        return uniqueInstance;
    }
}

In addition, note that uniqueInstance must be modified with the volatile keyword.

uniqueInstance = new Singleton(); actually executes in three steps:

1. Allocate memory space for uniqueInstance
2. Initialize uniqueInstance
3. Point uniqueInstance to the allocated memory address

However, due to the nature of instruction rearrangement of the JVM, the execution order may become 1->3->2. There is no problem with instruction rearrangement in a single-threaded environment, but in a multi-threaded environment it will cause a thread to obtain an instance that has not yet been initialized. For example, thread T1 executes 1 and 3, and T2 finds that uniqueInstance is not empty after calling getUniqueInstance(), so it returns uniqueInstance, but uniqueInstance has not been initialized yet.

Using volatile can prohibit the reordering of JVM instructions and ensure normal operation in a multi-threaded environment.

4. Tell me about the underlying implementation principle of synchronized?

synchronized is a Java keyword, and the locking and unlocking steps are not visible in the source code, so we need to inspect the corresponding bytecode with the javap command.

Case: a synchronized statement block

public class SynchronizedDemo {
    public void method() {
        synchronized (this) {
            System.out.println("synchronized code block");
        }
    }
}

Through the JDK disassembly instruction javap -c -v SynchronizedDemo.class

You can see monitor instructions before and after the synchronized code block: monitorenter on entry and monitorexit on leaving. It is not hard to picture how a thread executes the block: it first acquires the lock (the monitorenter step), and after executing the block it releases the lock by executing the monitorexit instruction.

  • Synchronized code block: as the bytecode shows, a synchronized statement block is implemented with the monitorenter and monitorexit instructions (see the abridged sketch after this list). monitorenter marks the start of the synchronized block and monitorexit its end; when execution reaches monitorenter, the current thread tries to acquire ownership of the monitor corresponding to the object lock.

  • Synchronized method: the bytecode of a synchronized method (for example a syncTask() method) contains no monitorenter or monitorexit and is shorter. The synchronization is implicit and needs no bytecode instructions to control it; instead the ACC_SYNCHRONIZED access flag distinguishes a synchronized method. When the method is invoked, the calling instruction checks whether ACC_SYNCHRONIZED is set; if so, the current thread first takes the monitor, then executes the method, and finally releases the monitor whether or not the method completes normally.
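For reference, a sketch of what javap -c typically prints for the method() above (abridged; exact offsets and constant-pool indices vary by compiler):

public void method();
  Code:
     0: aload_0
     1: dup
     2: astore_1
     3: monitorenter          // acquire the monitor of `this`
     4: getstatic     #2      // Field java/lang/System.out
     7: ldc           #3      // String synchronized code block
     9: invokevirtual #4      // Method java/io/PrintStream.println
    12: aload_1
    13: monitorexit           // normal path: release the monitor
    14: goto          22
    17: astore_2
    18: aload_1
    19: monitorexit           // exception path: release the monitor before rethrowing
    20: aload_2
    21: athrow
    22: return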

Why are there two monitorexit?

This is mainly to guard against the thread exiting the synchronized block abnormally without releasing the lock, which would inevitably cause deadlock (the waiting threads would never obtain the lock). The second monitorexit therefore ensures the lock is released on the exception path. For a synchronized method there is only the ACC_SYNCHRONIZED flag, which means the monitor is entered when the thread enters the method and exited when the thread leaves it.

5. The principle of synchronized reentrancy

A reentrant lock means that after a thread acquires the lock, it can acquire the same lock again. The underlying principle is a counter: when a thread acquires the lock the counter increments by one, and it increments again each time the lock is re-acquired; releasing the lock decrements the counter. When the counter reaches 0, the lock is not held by any thread, and other threads can compete to acquire it.
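A minimal sketch of reentrancy (the class name ReentrantDemo is ours): the same thread re-enters a second synchronized method guarded by the same lock.

public class ReentrantDemo {
    public synchronized void outer() {  // acquires the lock on `this` (counter: 0 -> 1)
        System.out.println("in outer");
        inner();                        // re-acquires the same lock (counter: 1 -> 2)
    }

    public synchronized void inner() {  // would deadlock here if synchronized were not reentrant
        System.out.println("in inner");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}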

6. What is spin

Much of the code inside synchronized blocks is simple and executes very quickly. Blocking all the waiting threads in that case may not be worthwhile, because blocking involves switching between user mode and kernel mode. Since the code inside synchronized executes so fast, it can be a better strategy to let a thread waiting for the lock busy-loop at the boundary of the synchronized block instead of blocking: this is spinning. If after several loops the lock still has not been obtained, the thread then blocks.

7. What is the principle of synchronized lock escalation in multithreading?

  • 1. If only one thread accesses the object and there is no competition from other threads, there is no need to lock the object: this is the lock-free state.
  • 2. When a thread executes the synchronized code block for the first time, it checks whether the lock object is unlocked. If so, it modifies the lock flag in the object header and stores the current thread's id in the mark word; the lock becomes a biased lock.
  • 3. If a second thread accesses the code block, it first checks whether the thread id in the lock object's mark word is its own. If so, it executes the synchronized code directly; if not, it tries to modify the mark word with a CAS operation. If the modification succeeds, it has obtained the biased lock; if it fails, there is lock competition and the lock is upgraded to a lightweight lock.
  • 4. The JVM then creates a space called a Lock Record in the current thread's stack frame, stores a copy of the lock object's current Mark Word there, and uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record. If this succeeds, the thread has won the contended lock; if it fails, the CAS is retried by spinning. If a thread's spins reach the maximum count, the lock competition is severe and the lightweight lock is upgraded to a heavyweight lock.

The purpose of the lock upgrade: the lock upgrade is to reduce the performance consumption caused by the lock. After Java 6, the implementation of synchronized is optimized, and the method of upgrading the biased lock to a lightweight lock and then to a heavyweight lock is used, thereby reducing the performance consumption caused by the lock.

8. How does thread B know that thread A has modified the variable?

(1) volatile modified variable

(2) Methods or code blocks modified by synchronized that modify the variable

(3) wait/notify

9. After a thread enters the synchronized method A of an object, can other threads enter the synchronized method B of this object?

No. Other threads can only access the object's non-synchronized methods; they cannot enter its synchronized methods. The synchronized modifier on a non-static method requires acquiring the object's lock before the method executes, so if a thread has entered method A, the object lock is already taken, and a thread trying to enter method B can only wait in the lock pool (note: not the waiting pool) for the object's lock.

Comparison of synchronized, volatile, and CAS

(1) synchronized is a pessimistic lock; it is preemptive and causes other threads to block.

(2) Volatile provides visibility of multi-threaded shared variables and prohibits instruction reordering optimization.

(3) CAS is an optimistic lock based on conflict detection (non-blocking)

10. What is the difference between synchronized and Lock(ReentrantLock)?

  • First of all, synchronized is a Java built-in keyword; Lock is a Java class, so it provides more and more flexible methods than synchronized, such as reentrant, interruptible, time-limited, fair lock, etc. www.jianshu.com/p/155260c8a
  • Synchronized can lock classes, methods, and code blocks; while lock can only lock code blocks.
  • synchronized does not require manually acquiring and releasing the lock; it is easy to use, releases the lock automatically when an exception occurs, and will not cause deadlock for that reason. Lock must be acquired and released by the programmer; if it is used improperly and unlock() is never called, it will cause deadlock.
  • The implementation classes of Lock basically support unfair locks (default) and fair locks. Synchronized only supports unfair locks. Of course, in most cases, unfair locks are an efficient choice.

Similarities: Both are reentrant locks

Both are reentrant locks. "Reentrant" means a thread can acquire a lock it already holds again: if a thread has acquired an object's lock and the lock has not yet been released, it can still succeed when it requests the same lock again. If locks were not reentrant like this, deadlock would result. Each time the same thread acquires the lock, the lock counter is incremented by one; the lock is not released until the counter drops back to 0.
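A hedged sketch of reentrancy with ReentrantLock (the class name and structure are illustrative): the same thread acquires the lock twice and must release it twice before the counter returns to zero.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock(); // unfair by default

    public void outer() {
        lock.lock();                 // hold count becomes 1
        try {
            inner();                 // same thread re-acquires: no deadlock
        } finally {
            lock.unlock();           // must release in finally, unlike synchronized
        }
    }

    private void inner() {
        lock.lock();                 // hold count becomes 2
        try {
            System.out.println("hold count = " + lock.getHoldCount());
        } finally {
            lock.unlock();           // counter drops back to 1
        }
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}
```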

volatile

1. The role of the volatile keyword

For visibility, Java provides the volatile keyword, which guarantees visibility and forbids instruction reordering. volatile provides a happens-before guarantee, ensuring that one thread's modification is visible to other threads. When a shared variable is modified by volatile, the new value is flushed to main memory immediately, and when another thread needs to read it, it reads the fresh value from main memory.

From a practical point of view, an important role of volatile is to combine with CAS to ensure atomicity. For details, see the classes under the java.util.concurrent.atomic package, such as AtomicInteger.

2. Can a volatile array be created in Java?

Yes, a volatile array can be created in Java, but volatile applies only to the array reference, not to the whole array. This means that if you replace the array the reference points to, that write is protected by volatile; but if multiple threads modify the array's elements concurrently, the volatile qualifier offers no protection.
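An illustrative sketch of this distinction (the names are assumptions for the example); if element-level atomicity is required, an atomic array such as AtomicIntegerArray is one option:

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class VolatileArrayDemo {
    // volatile protects only the reference: replacing the array is visible...
    private static volatile int[] data = new int[8];

    public static void main(String[] args) {
        data = new int[16];      // reference write: visible to all threads
        data[0] = 42;            // element write: NOT covered by volatile

        // if elements must be updated concurrently, use an atomic array instead
        AtomicIntegerArray safe = new AtomicIntegerArray(16);
        safe.incrementAndGet(0); // element-level atomic, visible update
    }
}
```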

3. What is the difference between volatile variables and atomic variables?

Volatile variables can ensure a happens-before relationship, that is, a write will happen before subsequent reads, but it does not guarantee atomicity. For example, even if the count variable is modified by volatile, the count++ operation is not atomic.

The atomic method provided by the AtomicInteger class can make this operation atomic. For example, the getAndIncrement() method atomically performs an incremental operation to increase the current value by one, and other data types and reference variables can also perform similar operations.

4. Can volatile make a non-atomic operation into an atomic operation?

The main function of the keyword volatile is to make variables visible among multiple threads, but atomicity cannot be guaranteed. For multiple threads to access the same instance variable, locks are needed for synchronization.

Although volatile only guarantees visibility and not atomicity in general, modifying long and double with volatile does guarantee that reads and writes of those values are atomic.

So you can see from the Oracle Java Spec:

For 64-bit long and double values not modified by volatile, reads and writes may not be atomic: an operation may be split into two steps, each acting on 32 bits. If long and double are modified with volatile, their reads and writes are all atomic operations. Reads and writes of 64-bit reference addresses are always atomic. A JVM implementation is free to choose whether to treat plain long and double reads and writes as atomic, and the spec recommends that JVMs implement them atomically.

5. What is the difference between synchronized and volatile?

synchronized means that only one thread can acquire the lock of the acting object, execute code, and block other threads.

Volatile means that the variable is uncertain in the registers of the CPU and must be read from the main memory. Ensure the visibility of variables in a multi-threaded environment; prohibit instruction reordering.

the difference

  • Volatile is a variable modifier; synchronized can modify classes and methods.
  • Volatile can only realize the visibility of the modification of the variable, and cannot guarantee the atomicity; while the synchronized can guarantee the visibility and atomicity of the modification of the variable.
  • Volatile will not cause thread blocking; synchronized may cause thread blocking.
  • Variables marked with volatile will not be optimized by the compiler; variables marked with synchronized can be optimized by the compiler.

The volatile keyword is a lightweight means of thread synchronization, so volatile certainly performs better than the synchronized keyword, but volatile can only be applied to variables while synchronized can modify methods and code blocks. Since JavaSE 1.6, synchronized has been heavily optimized, mainly through biased locks, lightweight locks, and other techniques that reduce the cost of acquiring and releasing locks; its execution efficiency has improved significantly, so in actual development there are still many scenarios where the synchronized keyword is used.

Lock system

Introduction to Lock and AQS

What is the Lock interface in the Java Concurrency API? What are its advantages over synchronization? The Lock interface provides more extensible locking operations than synchronized methods and synchronized blocks. It allows more flexible structures, may have quite different properties, and can support multiple associated Condition objects.

Its advantages are:

(1) It can make the lock more fair

(2) The thread can respond to the interrupt while waiting for the lock

(3) You can let the thread try to acquire the lock, and return immediately or wait for a period of time when the lock cannot be acquired

(4) Locks can be acquired and released in different ranges and in different orders

On the whole, Lock is an extended version of synchronized. Lock provides unconditional, pollable (tryLock method), timed (tryLock parameter method), interruptible (lockInterruptibly), and multi-conditional queue (newCondition method). Lock operation.

1. The understanding of optimistic locking and pessimistic locking and how to implement them, and what are the ways to implement them?

Pessimistic lock: always assumes the worst case. Every time it reads data it assumes someone else will modify it, so it locks on every access, and anyone else who wants the data blocks until the lock is released. For example, the synchronized keyword, a synchronization primitive in Java, is an implementation of pessimistic locking.

Optimistic lock: as the name suggests, it is optimistic. Every time it reads data it assumes others will not modify it, so it does not lock; only when updating does it check whether anyone else modified the data in the meantime, using mechanisms such as a version number. Optimistic locking suits read-heavy applications and can improve throughput. The atomic variable classes under the java.util.concurrent.atomic package in Java are implemented with CAS, one implementation of optimistic locking.

2. The realization of optimistic locking:

1. Use the version identifier to determine whether the data read is consistent with the data at the time of submission. After submitting, modify the version identification. In case of inconsistency, the strategy of discarding and retrying can be adopted.

2. Compare And Swap in Java is CAS. When multiple threads try to update the same variable with CAS at the same time, only one thread succeeds in updating the variable's value; the others fail, but a failed thread is not suspended: it is simply told it lost this round of the competition and may try again. A CAS operation contains three operands: the memory location to read and write (V), the expected original value to compare against (A), and the new value to write (B). If the value at memory location V matches the expected value A, the processor atomically updates the location to the new value B; otherwise the processor does nothing.

3. What is CAS

CAS is the abbreviation of compare and swap, which is what we call compare exchange.

CAS is a lock-free operation and an implementation of optimistic locking. In Java, locks are divided into optimistic and pessimistic. A pessimistic lock locks the resource: the next thread can access it only after the thread that previously acquired the lock releases it. Optimistic locking takes a relaxed attitude and processes the resource without locking, for example by attaching a version to the record when reading data; its performance is much better than pessimistic locking.

A CAS operation consists of three operands: the memory location (V), the expected original value (A), and the new value (B). If the value at the memory address equals A, the value in memory is updated to B. CAS obtains data through an infinite loop: if, in one round of the loop, the value at the address obtained by thread a is modified by thread b, thread a must spin, and it may not get a chance to succeed until a later iteration.

Most of the classes under the java.util.concurrent.atomic package are implemented using CAS operations (AtomicInteger, AtomicBoolean, AtomicLong).
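As a hedged sketch of the CAS retry loop described above (the class here is illustrative; AtomicInteger.incrementAndGet already does this internally):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // hand-written CAS loop; equivalent in effect to value.incrementAndGet()
    public int increment() {
        for (;;) {
            int expect = value.get();            // read current value (A)
            int update = expect + 1;             // compute new value (B)
            if (value.compareAndSet(expect, update)) {
                return update;                   // nobody changed V meanwhile
            }
            // CAS failed: another thread won this round, spin and retry
        }
    }
}
```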

What are the problems with CAS?

1. ABA problem:

For example, thread one fetches A from memory location V. Meanwhile, another thread two also fetches A, performs some operations that change it to B, and then changes the value at V back to A. When thread one performs its CAS operation it finds the memory still holds A, so the operation succeeds. Although thread one's CAS succeeded, there may be hidden problems. Starting from Java 1.5, the JDK's atomic package provides the class AtomicStampedReference to solve the ABA problem.
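A minimal sketch of using AtomicStampedReference to detect the A-to-B-to-A round trip (the values and stamps here are illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 0); // value + version stamp

        int stamp = ref.getStamp();                   // "thread one" records stamp 0

        // "thread two"'s A -> B -> A round trip bumps the stamp to 2
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);

        // "thread one"'s CAS now fails even though the value is "A" again
        boolean ok = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println("CAS succeeded? " + ok);   // prints false
    }
}
```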

2. Long cycle time and high cost:

In the case of serious resource competition (serious thread conflict), the probability of CAS spinning is relatively large, which wastes more CPU resources and is less efficient than synchronized.

3. Only the atomic operation of a shared variable can be guaranteed:

When performing operations on a shared variable, we can use cyclic CAS to ensure atomic operations, but when operating on multiple shared variables, cyclic CAS cannot guarantee the atomicity of the operation, and locks can be used at this time.

4. What is a deadlock?

When thread A holds exclusive lock a and tries to acquire exclusive lock b, while thread B holds exclusive lock b and tries to acquire exclusive lock a, each thread is blocked waiting for a lock the other holds. This mutual blocking is called a deadlock.
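A minimal sketch that reproduces this situation (the lock names are illustrative); running it will typically hang forever, each thread holding one lock and waiting for the other:

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause();                       // give the other thread time to grab lockB
                synchronized (lockB) {         // blocks forever: B holds lockB
                    System.out.println("never reached");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) {         // blocks forever: A holds lockA
                    System.out.println("never reached");
                }
            }
        }).start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
    }
}
```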

5. The following methods can be used to prevent deadlock:

  • Try to use the tryLock(long timeout, TimeUnit unit) methods (ReentrantLock, ReentrantReadWriteLock) and set a timeout, so a thread can give up on timeout and prevent deadlock (see the sketch after this list).
  • Try to use the java.util.concurrent classes instead of hand-written locks.
  • Try to reduce the use granularity of the lock, and try not to use the same lock for several functions.
  • Minimize the number of synchronized code blocks.
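As mentioned in the first item, a hedged sketch of the tryLock-with-timeout approach (the class and method names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        // give up after 1 second instead of blocking forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        } else {
            // timed out: back off, log, or retry later; no deadlock possible
        }
    }
}
```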

6. What is the upgrade principle of multi-threaded locks?

In Java there are 4 lock states, from lowest to highest: the lock-free state, biased lock, lightweight lock, and heavyweight lock. These states escalate gradually as competition increases. A lock can be upgraded but not downgraded.

AQS (AbstractQueuedSynchronizer) detailed explanation and source code analysis

1. Introduction to AQS

AQS is a framework for building locks and synchronizers. AQS can be used to easily and efficiently construct a large number of synchronizers for a wide range of applications, such as ReentrantLock, Semaphore, and others such as ReentrantReadWriteLock, SynchronousQueue, FutureTask, etc. It is based on AQS. Of course, we can also use AQS to easily construct a synchronizer that meets our own needs.

2. AQS principle overview

The core idea of AQS is that if the requested shared resource is free, the thread currently requesting the resource is set as a valid worker thread, and the shared resource is set to a locked state. If the requested shared resource is occupied, then a mechanism for thread blocking and waiting and lock allocation when awakened is needed. This mechanism AQS is implemented with CLH queue locks, that is, threads that cannot obtain locks temporarily are added to the queue.

The CLH (Craig, Landin, and Hagersten) queue is a virtual two-way queue (a virtual two-way queue means that there is no queue instance, only the relationship between the nodes). AQS encapsulates each thread requesting shared resources into a node (Node) of a CLH lock queue to realize lock allocation.

AQS uses an int member variable to represent the synchronization status, and completes the queuing work of the resource thread through the built-in FIFO queue. AQS uses CAS to perform atomic operations on the synchronization state to modify its value.

```java
// The shared variable; volatile modification guarantees thread visibility.
// The state is accessed through the protected methods getState, setState, compareAndSetState.
private volatile int state;

// Returns the current value of the synchronization state
protected final int getState() {
    return state;
}

// Sets the value of the synchronization state
protected final void setState(int newState) {
    state = newState;
}

// Atomically (via CAS) sets the synchronization state to update
// if the current state equals expect (the expected value)
protected final boolean compareAndSetState(int expect, int update) {
    return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}
```

3. How AQS shares resources

AQS defines two resource sharing methods

  • Exclusive: Only one thread can execute, such as ReentrantLock. It can be divided into fair locks and unfair locks:

    • Fair lock: According to the queuing order of threads in the queue, the first-comer gets the lock first
    • Unfair lock: When a thread wants to acquire a lock, it ignores the queue order and directly grabs the lock, whoever grabs it
  • Share: Multiple threads can be executed at the same time, such as Semaphore/CountDownLatch. We will talk about Semaphore, CountDownLatch, CyclicBarrier, ReadWriteLock later.

ReentrantReadWriteLock can be regarded as a combination, because ReentrantReadWriteLock is a read-write lock that allows multiple threads to read a resource at the same time.

Different custom synchronizers compete for shared resources in different ways. The custom synchronizer only needs to realize the acquisition and release of the shared resource state when it is implemented. As for the maintenance of the specific thread waiting queue (such as the failure to obtain the resource into the queue/waking up, etc.), AQS has been implemented at the top.

4. The bottom layer of AQS uses the template method pattern

The design of the synchronizer is based on the template method pattern. If you need to customize the synchronizer, the general way is as follows (the template method pattern is a classic application):

The user inherits AbstractQueuedSynchronizer and rewrites the specified method. (These rewriting methods are very simple, nothing more than the acquisition and release of shared resource state)

Combine AQS in the implementation of a custom synchronization component, and call its template methods, and these template methods will call the methods overwritten by the user.

This is very different from the way we have implemented interfaces in the past. This is a classic application of the template method pattern.

5. AQS uses the template method mode. When customizing the synchronizer, you need to rewrite the following template methods provided by AQS:

```java
// Whether the thread is holding the resource exclusively. Only needed if Condition is used.
isHeldExclusively()

// Exclusive mode. Tries to acquire the resource; returns true on success, false on failure.
tryAcquire(int)

// Exclusive mode. Tries to release the resource; returns true on success, false on failure.
tryRelease(int)

// Shared mode. Tries to acquire the resource. Negative means failure; 0 means success
// with no remaining resources; positive means success with resources remaining.
tryAcquireShared(int)

// Shared mode. Tries to release the resource; returns true on success, false on failure.
tryReleaseShared(int)
```
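A minimal sketch of a custom exclusive synchronizer built by overriding these template methods, along the lines of the classic Mutex example from the AQS documentation (simplified and non-reentrant):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal non-reentrant mutex built on AQS: state 0 = unlocked, 1 = locked.
public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int ignored) {
            // CAS 0 -> 1; on success, record the owner thread
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int ignored) {
            setExclusiveOwnerThread(null);
            setState(0);               // release; AQS wakes up a queued thread
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```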

6. The principle of concurrent components built on AQS

By default, each of the template methods above throws UnsupportedOperationException. Their implementations must be internally thread-safe and should generally be short and non-blocking. All other methods in the AQS class are final, so they cannot be overridden by subclasses; only these methods can be.

  • ReentrantLock

Take ReentrantLock as an example: state is initialized to 0, the unlocked state. When thread A calls lock(), tryAcquire() is invoked to take the lock exclusively and state becomes 1. After that, other threads fail at tryAcquire() until thread A calls unlock() enough times for state to return to 0 (i.e. the lock is released); only then do other threads have a chance to acquire the lock. Of course, before releasing, thread A itself can acquire the lock repeatedly (state accumulates), which is the concept of reentrancy. Note, however, that the lock must be released as many times as it was acquired so that state can return to zero.

  • CountDownLatch

Take CountDownLatch as an example. CountDownLatch is based on the use of AQS's sharing mode. The task is divided into N sub-threads to execute, and the state is also initialized to N (note that N must be consistent with the number of threads). These N sub-threads are executed in parallel. After each sub-thread is executed, countDown() once, and the state will be CAS (Compare and Swap) minus 1. After all child threads are executed (that is, state=0), the main calling thread will be unpark(), and then the main calling thread will return from the await() function and continue the rest of the action.

  • CyclicBarrier

The state is initialized to the number of parties. When a thread executes the await method, state decreases by 1. When the last thread reaches the barrier, state equals 0, meaning all threads are at the barrier and ready to pass; that thread wakes up the other threads and at the same time initializes the "next generation". A thread that is not the last to reach the barrier calls Condition's timed await and waits until the last thread calls await.

  • Semaphore

Each thread must call the acquire() method to obtain a resource before it can proceed, and after finishing it must release the resource for other threads to use. When creating a Semaphore instance you pass a parameter permits, which essentially becomes the AQS state; each thread's acquire executes state = state - 1 and each release executes state = state + 1. If state = 0 at acquire time, there are no resources left and the thread must wait for another thread to release.

  • ArrayBlockingQueue

In the source code of ArrayBlockingQueue, one ReentrantLock and two corresponding Conditions (notEmpty, notFull) are used. If the queue is empty, a reading thread enters the read-wait queue and waits for a writing thread to insert a new element, which then wakes the first waiting thread in the read queue. If the queue is full, a writing thread enters the write-wait queue and waits for a reading thread to remove an element and make room, which then wakes the first waiting thread in the write queue.

Generally speaking, custom synchronizers are either an exclusive method or a shared method, and they only need to implement one of tryAcquire-tryRelease and tryAcquireShared-tryReleaseShared. But AQS also supports custom synchronizers to achieve both exclusive and shared methods at the same time, such as ReentrantReadWriteLock.

7. What is a reentrant lock (ReentrantLock)?

ReentrantLock, the reentrant lock, is a class implementing the Lock interface and a lock frequently used in practice. It supports reentrancy, meaning the shared resource can be locked repeatedly: the current thread can acquire the lock again without being blocked.

The keyword synchronized in Java implicitly supports reentrancy, which it realizes by counting monitor acquisitions and releases. ReentrantLock additionally supports both fair and unfair locks. So, to fully understand ReentrantLock, the main things to study are its synchronization semantics: 1. how reentrancy is implemented; 2. fair locks versus unfair locks.

The realization principle of reentrancy

In order to support reentrancy, two problems must be solved: 1. when a thread acquires the lock, if the thread that already holds the lock is the current thread, it acquires it again directly and successfully; 2. since the lock may be acquired n times, it is only considered fully released after it has been released the same n times.

ReentrantLock supports two types of locks: fair locks and unfair locks. What is fairness is for acquiring locks. If a lock is fair, then the lock acquisition sequence should conform to the absolute time sequence on the request and satisfy the FIFO.

8. ReentrantReadWriteLock source code analysis

First, clarify what ReadWriteLock is. It is not that ReentrantLock is bad; rather, ReentrantLock has limitations. ReentrantLock may be used to prevent the data inconsistency caused by thread A writing data while thread B reads it, but if thread C and thread D are both only reading, reading does not change the data and there is no need to lock, yet ReentrantLock still locks, which reduces the program's performance. That is why ReadWriteLock was born.

ReadWriteLock is a read-write lock interface; a read-write lock is a lock-separation technique used to improve the performance of concurrent programs. ReentrantReadWriteLock is a concrete implementation of the ReadWriteLock interface that separates reads from writes: the read lock is shared and the write lock is exclusive. Read-read access is not mutually exclusive, while read-write, write-read, and write-write access are, which improves read and write performance.

The read-write lock has the following three important characteristics:

(1) Fair selectivity: supports both unfair (the default) and fair lock acquisition; throughput is still better with the unfair mode than the fair one.

(2) Re-entry: Both read locks and write locks support thread re-entry.

(3) Lock degradation: following the sequence of acquiring the write lock, then acquiring the read lock, then releasing the write lock, a write lock can be downgraded to a read lock (a sketch follows below).
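A hedged sketch of lock degradation with ReentrantReadWriteLock (the names are illustrative): the read lock is acquired before the write lock is released, so the thread ends up holding only the read lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int cached;

    public int updateAndRead(int newValue) {
        rw.writeLock().lock();           // 1. acquire the write lock
        try {
            cached = newValue;
            rw.readLock().lock();        // 2. acquire the read lock while still writing
        } finally {
            rw.writeLock().unlock();     // 3. release the write lock: downgraded to read
        }
        try {
            return cached;               // read under the read lock; other readers may join
        } finally {
            rw.readLock().unlock();
        }
    }
}
```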

Condition source code analysis and detailed explanation of LockSupport waiting notification mechanism

Concurrent container

Detailed explanation and source code analysis of the concurrent container ConcurrentHashMap (JDK 1.8)

What is ConcurrentHashMap? ConcurrentHashMap is Java's thread-safe, efficient HashMap implementation, and it is usually the first thing that comes to mind when a map structure is needed under high concurrency. Compared with HashMap, ConcurrentHashMap is a thread-safe map that uses the idea of lock segmentation to improve concurrency.

So how does it achieve thread safety?

After JDK1.8, ConcurrentHashMap abandoned the original Segment lock and adopted CAS + synchronized to ensure concurrency security.

What is the concurrency of ConcurrentHashMap in Java? ConcurrentHashMap divides the actual map into several parts to achieve its scalability and thread safety. This division is obtained by using the degree of concurrency, which is an optional parameter of the ConcurrentHashMap class constructor, and the default value is 16, so that contention can be avoided in the case of multithreading.

After JDK 8 it abandoned the Segment (segmented lock) concept and adopted a new implementation that uses the head node of each bucket in the array as the lock object, making the lock granularity finer.

1. What is the implementation of concurrent containers?

What is a synchronized container: It can be simply understood as a container that achieves synchronization through synchronized. If multiple threads call the method of the synchronized container, they will be executed serially. Such as Vector, Hashtable, and the containers returned by Collections.synchronizedSet, synchronizedList and other methods. By looking at the implementation code of these synchronized containers such as Vector and Hashtable, you can see that the way for these containers to achieve thread safety is to encapsulate their states and add the keyword synchronized to the methods that need to be synchronized.

Concurrent containers use a completely different locking strategy from synchronized containers to provide higher concurrency and scalability. For example, a more granular locking mechanism is used in ConcurrentHashMap, which can be called segmented lock. Under the lock mechanism, any number of reader threads are allowed to access the map concurrently, and the threads performing the read operation and the thread writing operations can also access the map concurrently. At the same time, a certain number of write operation threads are allowed to modify the map concurrently, so it can Achieve higher throughput in a concurrent environment.

2. What is the difference between synchronous collection and concurrent collection in Java?

Both synchronized collections and concurrent collections provide suitable thread-safe collections for multithreading and concurrency, but concurrent collections are more scalable. Before Java 1.5, programmers had only synchronized collections to use, and it would cause contention when multi-threaded concurrency, hindering the scalability of the system. Java5 introduced concurrent collections like ConcurrentHashMap, which not only provide thread safety, but also improve scalability with modern technologies such as lock separation and internal partitioning.

3. What is the difference between SynchronizedMap and ConcurrentHashMap?

SynchronizedMap locks the whole table at once to ensure thread safety, so only one thread can access the map at a time.

ConcurrentHashMap uses segmented locks to ensure performance under multiple threads.

In ConcurrentHashMap, one bucket is locked at a time. ConcurrentHashMap divides the hash table into 16 buckets by default. Common operations such as get, put, remove, etc. only lock the buckets currently needed.

In this way, originally only one thread can enter, but now 16 writing threads can execute at the same time, the concurrency performance improvement is obvious.

In addition, ConcurrentHashMap uses a different iteration approach. With this kind of iterator, ConcurrentModificationException is no longer thrown when the collection changes after the iterator is created. Instead, a change creates new data so the original data is not affected; once the change is complete the head pointer is switched to the new data. This lets the iterating thread keep using the old data while writing threads complete their changes concurrently.
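A small illustrative sketch (the names are assumed for the example) of why ConcurrentHashMap avoids lost updates under contention, using the atomic per-key merge operation:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                // merge is atomic per key: no lost updates under contention
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println(counts.get("hits")); // always 2000
    }
}
```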

4. Detailed explanation of CopyOnWriteArrayList of concurrent containers

What is CopyOnWriteArrayList, what scenarios can it be used in, and what are its advantages and disadvantages? CopyOnWriteArrayList is a concurrent container. Many people call it thread-safe; I think that statement is not rigorous and lacks a precondition: it is thread-safe only for non-composite operations.

One benefit of CopyOnWriteArrayList (a lock-free container for readers) is that no ConcurrentModificationException is thrown when multiple iterators traverse the list while it is being modified. In CopyOnWriteArrayList, every write creates a copy of the entire underlying array while the source array stays in place, so reads can proceed safely while the copy is being modified.
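A minimal sketch of this snapshot behavior (the names are illustrative): the iterator sees only the array as it was when iteration began, and concurrent writes do not throw ConcurrentModificationException.

```java
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        // the iterator works on the array snapshot taken at creation time
        for (String s : list) {
            list.add("c");            // no ConcurrentModificationException
            System.out.println(s);    // prints only "a" and "b"
        }
        System.out.println(list.size()); // 4: the writes did happen, on copies
    }
}
```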

Use scenarios of CopyOnWriteArrayList

Through source code analysis, we see that its advantages and disadvantages are more obvious, so the usage scenarios are more obvious. It is suitable for the scenario of reading more and writing less.

Disadvantages of CopyOnWriteArrayList

Since the array must be copied on every write operation, memory is consumed; if the original array's content is large, this may trigger young GC or even full GC. It cannot be used in scenarios that require real-time reads: copying the array and adding elements both take time, so after a set operation the data read may still be old. CopyOnWriteArrayList achieves eventual consistency, but it cannot satisfy real-time requirements. In actual use it may be impossible to guarantee how much data will be placed in a CopyOnWriteArrayList; if there is a little too much, the whole array must be copied again on every add/set, which is far too expensive. In high-performance Internet applications, this kind of operation causes failures in minutes.

The design idea of CopyOnWriteArrayList

Read-write separation and eventual consistency: it resolves concurrency conflicts with the idea of allocating new space (copy-on-write).

Detailed ThreadLocal

1. What is ThreadLocal? What are the usage scenarios?

ThreadLocal is a utility class for thread-local copies of variables. A ThreadLocalMap object is created in each thread. Simply put, ThreadLocal trades space for time: each thread can access the value in its own internal ThreadLocalMap object, which avoids sharing the resource among multiple threads.

Principle: Thread local variables are variables confined to the inside of the thread, belong to the thread itself, and are not shared among multiple threads. Java provides the ThreadLocal class to support thread local variables, which is a way to achieve thread safety. But be especially careful when using thread-local variables in a management environment (such as a web server). In this case, the life cycle of a worker thread is longer than the life cycle of any application variable. Once any thread-local variables are not released after the work is completed, Java applications have the risk of memory leaks.

The classic usage scenario is to allocate a JDBC connection for each thread. In this way, it can be ensured that each thread is performing database operations on its own Connection, and there will be no problems with A thread shutting down the Connection being used by B thread; there are also Session management and other issues.

ThreadLocal usage example:

```java
public class TestThreadLocal {

    // Thread-local variable, initialized to 0 for each thread
    private static final ThreadLocal<Integer> THREAD_LOCAL_NUM = new ThreadLocal<Integer>() {
        @Override
        protected Integer initialValue() {
            return 0;
        }
    };

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {   // start three threads
            Thread t = new Thread() {
                @Override
                public void run() {
                    add10ByThreadLocal();
                }
            };
            t.start();
        }
    }

    /**
     * Increment the thread-local variable 5 times
     */
    private static void add10ByThreadLocal() {
        for (int i = 0; i < 5; i++) {
            Integer n = THREAD_LOCAL_NUM.get();
            n += 1;
            THREAD_LOCAL_NUM.set(n);
            System.out.println(Thread.currentThread().getName() + ": ThreadLocal num=" + n);
        }
    }
}
```

Printed result: 3 threads are started, and each thread ends by printing "ThreadLocal num=5", rather than num accumulating across threads until the value equals 15.

2. What are thread local variables?

Thread local variables are variables confined to the inside of the thread, belong to the thread itself, and are not shared among multiple threads. Java provides the ThreadLocal class to support thread local variables, which is a way to achieve thread safety. But be careful when using thread-local variables in a management environment (such as a web server). In this case, the life cycle of a worker thread is longer than the life cycle of any application variable. Once any thread-local variables are not released after the work is completed, Java applications have the risk of memory leaks.

3. ThreadLocal memory leak analysis and solution

What causes memory leaks with ThreadLocal? The key used in ThreadLocalMap is a weak reference to the ThreadLocal, while the value is a strong reference. Therefore, if the ThreadLocal has no strong external references, the key will be cleaned up during garbage collection but the value will not, leaving Entries with a null key in the ThreadLocalMap. If we take no measures, those values can never be reclaimed by the GC, and a memory leak may occur. The implementation of ThreadLocalMap anticipates this: when the set(), get(), and remove() methods are called, records with a null key are cleaned up. Even so, it is best to call the remove() method manually after you finish using a ThreadLocal.
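A hedged sketch of the recommended cleanup pattern (the class and field names are assumptions for the example):

```java
public class ThreadLocalCleanup {
    private static final ThreadLocal<byte[]> BUFFER =
            ThreadLocal.withInitial(() -> new byte[1024]);

    public static void handleRequest() {
        try {
            byte[] buf = BUFFER.get();   // use the per-thread value
            // ... work with buf ...
        } finally {
            BUFFER.remove();             // clear the Entry so a pooled worker
                                         // thread cannot leak the value
        }
    }
}
```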

Detailed explanation of BlockingQueue of concurrent containers

1. What is a blocking queue? What is the realization principle of blocking queue? How to use blocking queues to implement the producer-consumer model?

Blocking Queue (BlockingQueue) is a queue that supports two additional operations.

These two additional operations are: when the queue is empty, the thread that gets the element will wait for the queue to become non-empty. When the queue is full, the thread storing the element will wait for the queue to be available.

Blocking queues are often used in producer and consumer scenarios. Producers are threads that add elements to the queue, and consumers are threads that take elements from the queue. The blocking queue is the container in which the producer stores the elements, and the consumer only takes the elements from the container.

JDK7 provides 7 blocking queues. They are:

ArrayBlockingQueue: A bounded blocking queue composed of an array structure.

LinkedBlockingQueue: A bounded blocking queue composed of a linked list structure.

PriorityBlockingQueue: An unbounded blocking queue that supports priority sorting.

DelayQueue: An unbounded blocking queue implemented using priority queues.

SynchronousQueue: A blocking queue that does not store elements.

LinkedTransferQueue: An unbounded blocking queue composed of a linked list structure.

LinkedBlockingDeque: A double-ended blocking queue composed of a linked list structure.

Before Java 5, synchronized access could be implemented with an ordinary collection plus thread collaboration and thread synchronization to build the producer-consumer model, mainly by making good use of wait, notify, notifyAll, and synchronized. After Java 5, blocking queues can be used instead, which greatly reduces the amount of code, makes multi-threaded programming easier, and guarantees safety.

The BlockingQueue interface is a sub-interface of Queue. Its main purpose is not to serve as a container but as a tool for thread synchronization, and it has one obvious feature: when a producer thread tries to put an element into a full BlockingQueue, the thread blocks; when a consumer thread tries to take an element from an empty one, it also blocks. Because of this feature, multiple threads alternately putting elements into and taking elements out of a BlockingQueue gives good control over communication between threads.
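A minimal producer-consumer sketch using ArrayBlockingQueue (the sizes and names are illustrative): put() blocks when the queue is full and take() blocks when it is empty, so no explicit wait/notify code is needed.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(5); // bounded

        new Thread(() -> {              // producer
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i);       // blocks while the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException ignored) {}
        }).start();

        new Thread(() -> {              // consumer
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println("consumed " + queue.take()); // blocks while empty
                }
            } catch (InterruptedException ignored) {}
        }).start();
    }
}
```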

Detailed explanation of ArrayBlockingQueue and LinkedBlockingQueue of concurrent containers

Thread Pool

The Executors class creates four common thread pools

1. What is a thread pool? What are the ways to create it?

Pooling technology should not be unfamiliar: thread pools, database connection pools, HTTP connection pools, and so on are all applications of this idea. The main point of pooling is to reduce the cost of acquiring a resource each time and to improve resource utilization.

In object-oriented programming, creating and destroying objects is time-consuming, because creating an object requires access to memory resources or more resources. This is especially true in Java, where the virtual machine will try to track each object so that it can be garbage collected after the object is destroyed. Therefore, one way to improve the efficiency of the service program is to reduce the number of creation and destruction of objects as much as possible, especially the creation and destruction of some resource-intensive objects. This is the reason for the "pooled resource" technology.

A thread pool, as the name implies, creates several executable threads in advance and puts them into a pool (container). When a thread is needed it is taken from the pool rather than created, and when it is no longer needed it is returned to the pool rather than destroyed, saving the cost of creating and destroying thread objects. The Executor interface, available since Java 5, defines a tool for executing threads; its sub-type ExecutorService is the thread pool interface. Configuring a thread pool is fairly involved, especially when the principles behind it are unclear, so the utility class Executors provides static factory methods that produce some commonly used thread pools, as shown below:

(1) newSingleThreadExecutor: Creates a single-threaded thread pool. This pool has only one working thread, which is equivalent to a single thread executing all tasks serially. If the only thread ends abnormally, a new thread replaces it. This pool guarantees that all tasks are executed in the order they were submitted.

(2) newFixedThreadPool: Create a fixed-size thread pool. Each time a task is submitted, a thread is created until the thread reaches the maximum size of the thread pool. Once the size of the thread pool reaches the maximum value, it will remain unchanged. If a thread ends due to an abnormal execution, the thread pool will add a new thread. If you want to use the thread pool on the server, it is recommended to use the newFixedThreadPool method to create the thread pool, so that you can get better performance.

(3) newCachedThreadPool: Create a cacheable thread pool. If the size of the thread pool exceeds the threads required to process tasks, some idle threads (without performing tasks for 60 seconds) will be recycled. When the number of tasks increases, the thread pool can intelligently add new threads to process tasks. This thread pool does not limit the size of the thread pool. The size of the thread pool is completely dependent on the maximum thread size that the operating system (or JVM) can create.

(4) newScheduledThreadPool: Creates a thread pool with an effectively unbounded maximum size that supports the scheduled and periodic execution of tasks.

2. What are the advantages of thread pools?

Reduce resource consumption: reuse existing threads and reduce the overhead of object creation and destruction.

Improve response speed: when a task arrives, it can be executed immediately without waiting for a thread to be created. Effectively controlling the maximum number of concurrent threads also improves system resource utilization and avoids excessive resource contention and blocking.

Improve the manageability of threads. Threads are scarce resources. If they are created unlimitedly, they will not only consume system resources, but also reduce the stability of the system. The thread pool can be used for uniform allocation, tuning and monitoring.

Additional functions: Provide timing execution, regular execution, single thread, concurrent number control and other functions.

In summary, using the Executor thread pool framework makes it easier to manage threads and improves system resource utilization.

4. What are the statuses of the thread pool?

  • RUNNING: This is the most normal state, accepting new tasks and processing tasks in the waiting queue.
  • SHUTDOWN: Do not accept new task submissions, but will continue to process tasks in the waiting queue.
  • STOP: Do not accept new task submissions, no longer process tasks in the waiting queue, and interrupt the threads that are executing tasks.
  • TIDYING: All tasks have terminated and workCount is 0; when the pool transitions to the TIDYING state, the hook method terminated() is executed.
  • TERMINATED: After the terminated() method ends, the state of the thread pool will change to this.

5. What is the Executor framework? Why use the Executor framework?

The Executor framework is a framework for invoking, scheduling, executing and controlling asynchronous tasks according to a set of execution strategies.

Creating a thread new Thread() every time a task is performed consumes performance. Creating a thread is time-consuming and resource-consuming, and unlimited thread creation will cause application memory overflow.

So creating a thread pool is a better solution, because the number of threads can be limited and these threads can be recycled and reused. It is very convenient to create a thread pool using the Executors framework.

6. What is the difference between Executor and Executors in Java?

The different methods of the Executors tool class create different thread pools according to our needs to meet business needs.

Executor interface objects can perform our thread tasks.

The ExecutorService interface inherits the Executor interface and has been extended to provide more methods for us to get the status of the task execution and get the return value of the task.

Use ThreadPoolExecutor to create a custom thread pool.

7. What is the difference between the submit() and execute() methods in the thread pool?

  • Receive parameters: execute() can only execute Runnable tasks. submit() can perform Runnable and Callable tasks.

  • Return value: The submit() method can return a Future object holding the calculation result, but execute() does not

  • Exception handling: submit() makes exception handling easier, since exceptions thrown by the task surface when Future.get() is called (see the sketch after this list)
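A small illustrative sketch of the difference (the names are assumed for the example):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.execute(() -> System.out.println("execute: fire-and-forget Runnable"));

        Future<Integer> future = pool.submit(() -> 1 + 1);      // Callable with a result
        System.out.println("submit result = " + future.get()); // get() also rethrows exceptions

        pool.shutdown();
    }
}
```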

8. Detailed explanation of ThreadPoolExecutor of thread pool

The difference between using Executors and ThreadPoolExecutor to create thread pools: the "Alibaba Java Development Manual" mandates that thread pools not be created with Executors, but with the ThreadPoolExecutor constructor. This approach forces developers to be explicit about the thread pool's running rules and avoids the risk of resource exhaustion.

Disadvantages of each method of Executors:

newFixedThreadPool and newSingleThreadExecutor: The main problem is that the piled up request processing queue may consume very large memory or even OOM.

newCachedThreadPool and newScheduledThreadPool: The main problem is that the maximum number of threads is Integer.MAX_VALUE, which may create a very large number of threads, or even OOM.

There is only one way for ThreadPoolExecutor to create a thread pool: use its constructor and specify the parameters yourself. ThreadPoolExecutor() is the most primitive form of thread pool creation, and it is the method explicitly required by the Alibaba Java Development Manual.

9.Analysis of important parameters of ThreadPoolExecutor constructor

The 3 most important parameters of ThreadPoolExecutor are corePoolSize, maximumPoolSize, and workQueue. The full set of constructor parameters is as follows (a construction sketch follows this list):

  • corePoolSize: The number of core threads, which defines the minimum number of threads that can run at the same time.
  • maximumPoolSize: the maximum number of worker threads allowed in the thread pool
  • workQueue: When a new task comes, it will first determine whether the number of threads currently running has reached the number of core threads, and if so, the task will be stored in the queue.
  • keepAliveTime: When the number of threads in the thread pool is greater than corePoolSize, if no new tasks are submitted at this time, threads outside the core thread will not be destroyed immediately, but will wait until the waiting time exceeds keepAliveTime before being recycled and destroyed;
  • unit: The time unit of the keepAliveTime parameter.
  • threadFactory: Provides a thread factory for the thread pool to create new threads
  • handler: the rejection (saturation) policy applied once the queue is full and the number of threads has reached maximumPoolSize
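As referenced above, a hedged construction sketch showing where each parameter goes (all values here are illustrative, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                  // corePoolSize
                4,                                  // maximumPoolSize
                60L, TimeUnit.SECONDS,              // keepAliveTime + unit for idle non-core threads
                new ArrayBlockingQueue<>(100),      // workQueue: bounded, avoids OOM
                Thread::new,                        // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // handler: saturation policy

        pool.execute(() -> System.out.println("task running"));
        pool.shutdown();
    }
}
```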

10. ThreadPoolExecutor saturation strategy

ThreadPoolExecutor saturation strategy definition:

If the number of threads currently running has reached the maximum and the queue is already full, ThreadPoolExecutor applies one of the following saturation strategies:

  • ThreadPoolExecutor.AbortPolicy: Throw a RejectedExecutionException to reject the processing of the new task.
  • ThreadPoolExecutor.CallerRunsPolicy: runs the rejected task in the calling thread itself rather than discarding it. This slows down the rate at which new tasks are submitted and affects the program's overall performance, effectively buying time for the queue to drain. If your application can tolerate this delay and you cannot discard any task request, choose this strategy.
  • ThreadPoolExecutor.DiscardPolicy: Do not process new tasks, just discard them.
  • ThreadPoolExecutor.DiscardOldestPolicy: This policy will discard the oldest unprocessed task request.

For example: when Spring creates a thread pool via ThreadPoolTaskExecutor, or when we use the ThreadPoolExecutor constructor directly without specifying a RejectedExecutionHandler saturation policy, the default is ThreadPoolExecutor.AbortPolicy. By default, ThreadPoolExecutor throws RejectedExecutionException to reject the new task, which means that task's processing is lost. For scalable applications, ThreadPoolExecutor.CallerRunsPolicy is recommended; when the pool is saturated this strategy in effect gives us a scalable queue. (You can see this directly in the source of ThreadPoolExecutor's constructor; for brevity the code is not posted here.)

A simple thread pool Demo: Runnable+ThreadPoolExecutor

11. Thread pool implementation principle

Atomic operation class

1. What is an atomic operation? What are the atomic classes in the Java Concurrency API?

Atomic operation means "an operation or a series of operations that cannot be interrupted". int++ is not an atomic operation, so when one thread reads its value and adds 1, another thread may read the previous value, which will cause an error.

Atomic operations can be implemented in Java by means of locks or CAS loops. CAS stands for Compare-And-Swap (sometimes Compare-And-Set); nowadays almost all CPUs support CAS as an atomic instruction.

In order to solve this problem, it is necessary to ensure that the increase operation is atomic. Before JDK 1.5, we could use synchronization technology to do this. As of JDK1.5, the java.util.concurrent.atomic package provides atomic wrappers of int and long types, which can automatically ensure that their operations are atomic and do not require synchronization.

The java.util.concurrent package provides a set of atomic classes. Their basic characteristic is that, in a multi-threaded environment, when multiple threads simultaneously execute methods of instances of these classes, they behave exclusively: once one thread enters a method and executes its instructions, it will not be interfered with by other threads, while the other threads wait, like a spin lock, until the method finishes and the JVM picks another thread from the waiting queue to enter. This is just a logical model.

Atomic classes: AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference

Atomic array: AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray

Atomic attribute updater: AtomicLongFieldUpdater, AtomicIntegerFieldUpdater, AtomicReferenceFieldUpdater

Atomic class to solve the ABA problem: AtomicMarkableReference (by introducing a boolean to reflect whether there has been a change in the middle), AtomicStampedReference (by introducing an int to accumulate to reflect whether there has been a change in the middle)

What is the principle behind the Atomic classes? The basic characteristic of the classes in the Atomic package is that, in a multi-threaded environment, when multiple threads operate on a single variable (basic or reference type) at the same time, the operation is exclusive: when several threads update the variable's value at once, only one succeeds, while the unsuccessful ones can keep retrying like a spin lock until they succeed.

Part of the source code of the AtomicInteger class:

```java
// setup to use Unsafe.compareAndSwapInt for updates
// (provides the "compare and swap" primitive for update operations)
private static final Unsafe unsafe = Unsafe.getUnsafe();
private static final long valueOffset;

static {
    try {
        valueOffset = unsafe.objectFieldOffset(
                AtomicInteger.class.getDeclaredField("value"));
    } catch (Exception ex) {
        throw new Error(ex);
    }
}

private volatile int value;
```

The AtomicInteger class mainly uses CAS (compare and swap) + volatile and native methods to ensure atomic operations, thereby avoiding the high overhead of synchronized and greatly improving execution efficiency.

The principle of CAS is to compare the expected value with the original value and update to the new value if they are equal. The objectFieldOffset() method of the Unsafe class is a native method used to obtain the memory offset of the "original value"; its return value is valueOffset. In addition, value is a volatile variable, visible across memory, so the JVM can guarantee that any thread always sees the variable's latest value.

Concurrency tools

Concurrency tools of CountDownLatch and CyclicBarrier

1. What is the difference between CyclicBarrier and CountDownLatch in Java?

CountDownLatch and CyclicBarrier are both tool classes used to control concurrency. Both can be understood as maintaining a counter, but the two still have different focuses:

  • CountDownLatch is generally used for a thread A to wait for several other threads to perform tasks before it executes; and CyclicBarrier is generally used for a group of threads to wait for each other to a certain state, and then this group of threads execute at the same time;
  • CountDownLatch emphasizes that one thread waits for multiple threads to complete something. CyclicBarrier is that multiple threads wait for each other, wait for everyone to complete, and then work together.
  • After calling the countDown method of CountDownLatch, the current thread will not be blocked, and will continue to execute; and calling the await method of CyclicBarrier will block the current thread until all the threads specified by CyclicBarrier have reached the specified point. carried out;

  • CountDownLatch has relatively few methods and is simple to operate, while CyclicBarrier provides more methods, such as getNumberWaiting() and isBroken() to obtain the current state of the threads; moreover, CyclicBarrier's constructor can take a barrierAction specifying a business function to run once all threads reach the barrier;

  • CountDownLatch cannot be reused, while CyclicBarrier can be reused.
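A minimal CountDownLatch sketch of the "one thread waits for several others" pattern described above (the names and counts are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int n = 3;
        CountDownLatch latch = new CountDownLatch(n);  // counter initialized to N

        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();                     // counter - 1, does not block
            }).start();
        }

        latch.await();                                 // main thread parks until counter == 0
        System.out.println("all sub-tasks finished, main thread continues");
    }
}
```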

2. Semaphore and Exchanger of concurrency tools

What does Semaphore do? Semaphore is a semaphore whose role is to limit the number of threads concurrently inside a code block. Semaphore has a constructor that takes an int n, meaning a given piece of code can be accessed by at most n threads at a time; beyond n, a thread must wait until some thread finishes executing the block before the next one can enter. It follows that if n = 1 is passed to the Semaphore constructor, it becomes equivalent to synchronized.

Semaphore (semaphore)-allows multiple threads to access at the same time: synchronized and ReentrantLock both allow only one thread to access a resource at a time, Semaphore (semaphore) can specify multiple threads to access a resource at the same time.
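A hedged sketch of limiting concurrent access with Semaphore (the permit count and names are illustrative):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore permits = new Semaphore(2);   // at most 2 threads inside at once

        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    permits.acquire();          // permit - 1, blocks when 0
                    System.out.println(Thread.currentThread().getName() + " working");
                    Thread.sleep(500);
                } catch (InterruptedException ignored) {
                } finally {
                    permits.release();          // permit + 1, wakes a waiting thread
                }
            }).start();
        }
    }
}
```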

What is Exchanger, the tool for exchanging data between threads? Exchanger is a utility class for inter-thread collaboration, used to exchange data between two threads. It provides a synchronization point at which the two threads can exchange data via the exchange method: if one thread executes exchange first, it waits until the other thread also executes exchange; at that moment both threads have reached the synchronization point and can exchange their data.
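A minimal Exchanger sketch (the exchanged strings are illustrative): each thread blocks at exchange() until its partner arrives, then the two payloads swap.

```java
import java.util.concurrent.Exchanger;

public class ExchangerDemo {
    public static void main(String[] args) {
        Exchanger<String> exchanger = new Exchanger<>();

        new Thread(() -> {
            try {   // waits at the sync point until the partner arrives
                String fromB = exchanger.exchange("data from A");
                System.out.println("A received: " + fromB);
            } catch (InterruptedException ignored) {}
        }).start();

        new Thread(() -> {
            try {
                String fromA = exchanger.exchange("data from B");
                System.out.println("B received: " + fromA);
            } catch (InterruptedException ignored) {}
        }).start();
    }
}
```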

3. What are the commonly used concurrency tools?

Semaphore (semaphore): allows multiple threads to access a resource at the same time. synchronized and ReentrantLock allow only one thread to access a resource at a time, while Semaphore can let a specified number of threads access a resource simultaneously.

CountDownLatch (countdown timer): a synchronization utility class used to coordinate synchronization between multiple threads. It is usually used to control thread waiting: it can make a thread wait until the countdown ends before starting execution.

CyclicBarrier (cyclic barrier): very similar to CountDownLatch in that it also implements waiting between threads, but its function is more complex and powerful than CountDownLatch's, and its main application scenarios are similar. CyclicBarrier literally means a barrier that can be used cyclically. What it does is block a group of threads when they reach a barrier (also called a synchronization point); the barrier does not open until the last thread reaches it, and then all the threads intercepted by the barrier continue working. The default constructor is CyclicBarrier(int parties), whose parameter indicates the number of threads the barrier intercepts. Each thread calls the await() method to tell the CyclicBarrier it has reached the barrier, and the current thread is then blocked.