# Digging deeper into the Handler mechanism


## PriorityQueue (priority queue)

Before talking about Handler, let me talk about the priority queue. The concrete class Java provides is PriorityQueue, which implements the Queue interface and extends AbstractQueue in the Java collections framework.

Both Queue and Deque inherit from Collection, and Deque is a sub-interface of Queue. A Deque is a double-ended queue: elements can be inserted or removed at both the head and the tail. Queue, in its own documentation, is a simple FIFO queue. So conceptually, Queue is a FIFO single-ended queue, while Deque is a double-ended queue. For the PriorityQueue class, the Java documentation describes it as an unbounded priority queue based on a priority heap ("unbounded" here means the backing collection grows automatically). The ordering of its elements is either their natural ordering (the element class must implement Comparable; inserting an element that does not will throw a ClassCastException), or the ordering given by a Comparator passed to the constructor. The source code of offer is as follows:

```java
transient Object[] queue;

public boolean offer(E e) {
    if (e == null)
        throw new NullPointerException();
    modCount++;
    int i = size;
    if (i >= queue.length)
        grow(i + 1);
    size = i + 1;
    if (i == 0)
        queue[0] = e;
    else
        siftUp(i, e);
    return true;
}

private void siftUp(int k, E x) {
    if (comparator != null)
        siftUpUsingComparator(k, x);
    else
        siftUpComparable(k, x);
}

private void siftUpComparable(int k, E x) {
    Comparable<? super E> key = (Comparable<? super E>) x;
    while (k > 0) {
        int parent = (k - 1) >>> 1;
        Object e = queue[parent];
        if (key.compareTo((E) e) >= 0)
            break;
        queue[k] = e;
        k = parent;
    }
    queue[k] = key;
}

private void siftUpUsingComparator(int k, E x) {
    while (k > 0) {
        int parent = (k - 1) >>> 1;
        Object e = queue[parent];
        if (comparator.compare(x, (E) e) >= 0)
            break;
        queue[k] = e;
        k = parent;
    }
    queue[k] = x;
}
```

Roughly: the priority queue is backed by an array. When an element is offered, the array is grown if necessary, size is incremented by 1, and the element is then sifted up into the position dictated by the ordering. The source code of poll is as follows:

```java
public E poll() {
    if (size == 0)
        return null;
    int s = --size;
    modCount++;
    E result = (E) queue[0];
    E x = (E) queue[s];
    queue[s] = null;
    if (s != 0)
        siftDown(0, x);
    return result;
}
```

Popping removes the first element of the array (the head of the heap): size is decremented by 1, the last slot of the array is set to null, and the former last element is sifted down from index 0 so that the remaining elements are re-ordered into a valid heap. That is the concept of a priority queue.
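
As a quick illustration (a standalone sketch, not Android code), the two orderings mentioned above — natural ordering via Comparable, and a Comparator passed to the constructor — behave as follows:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Natural ordering: Integer implements Comparable, so the smallest element is polled first.
        PriorityQueue<Integer> natural = new PriorityQueue<>();
        natural.offer(5);
        natural.offer(1);
        natural.offer(3);
        System.out.println(natural.poll()); // 1

        // Comparator passed to the constructor: here the largest element comes out first.
        PriorityQueue<Integer> reversed = new PriorityQueue<>(Comparator.reverseOrder());
        reversed.offer(5);
        reversed.offer(1);
        reversed.offer(3);
        System.out.println(reversed.poll()); // 5
    }
}
```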

## Handler mechanism

  • Handler: used to send messages via sendMessage and related methods, and to handle them by overriding handleMessage() (a Message callback or the Handler's Callback can also be used for handling).
  • Message: the message entity; what gets sent is of type Message.
  • MessageQueue: the message queue that stores the messages. When a message is sent it is enqueued here, and Looper later takes it out of the MessageQueue for processing.
  • Looper: bound to a thread (not limited to the main thread); the bound thread is the one that processes the messages. Its loop() method is an endless loop that keeps taking messages out of the MessageQueue for processing. (A minimal wiring of these four parts is sketched below.)
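
To make the roles concrete, here is a minimal sketch (not framework source): a Handler bound to the main thread's Looper acts as the consumer, while a worker thread produces a Message.

```java
// Sketch: the main thread's Handler/Looper/MessageQueue consume
// a Message produced on a worker thread.
Handler mainHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        // runs on the main thread
    }
};

new Thread(() -> {
    Message msg = Message.obtain();
    msg.what = 1;
    mainHandler.sendMessage(msg); // enqueued into the main thread's MessageQueue
}).start();
```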

The Handler mechanism is essentially a producer-consumer model: the thread that sends the message is the producer and the thread that processes it is the consumer (send/post happens on the producing thread, handleMessage on the consuming thread). The workflow is as follows. Handler's various send and post methods all eventually call enqueueMessage() in the MessageQueue class. Each message carries its processing time in the when field: for sendMessage(msg) the when is the current system time, and for sendMessageDelayed it is the current system time plus the delay. The message is then enqueued into the MessageQueue, which follows the idea of a priority queue and keeps messages sorted by their when value: the earliest message at the front, the latest at the back. In the Handler class, a post of a Runnable is really just a send of a Message whose callback field is that Runnable:

```java
private static Message getPostMessage(Runnable r) {
    Message m = Message.obtain();
    m.callback = r;
    return m;
}
```
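
As a small usage sketch, the two ways of delivering work end up as the same kind of Message in the queue:

```java
Handler handler = new Handler(Looper.getMainLooper());

// post(): the Runnable is wrapped by getPostMessage() and becomes msg.callback.
handler.post(() -> Log.d("HandlerDemo", "posted runnable"));

// send(): an explicit Message identified by `what`, handled later in handleMessage().
handler.sendMessageDelayed(handler.obtainMessage(1), 1000); // when = current uptime + 1000 ms
```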

The source code of the enqueueMessage method in the MessageQueue class is as follows:

```java
boolean enqueueMessage(Message msg, long when) {
    if (msg.target == null) {
        throw new IllegalArgumentException("Message must have a target.");
    }
    if (msg.isInUse()) {
        throw new IllegalStateException(msg + " This message is already in use.");
    }

    synchronized (this) {
        if (mQuitting) {
            IllegalStateException e = new IllegalStateException(
                    msg.target + " sending message to a Handler on a dead thread");
            Log.w(TAG, e.getMessage(), e);
            msg.recycle();
            return false;
        }

        msg.markInUse();
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // New head, wake up the event queue if blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Inserted within the middle of the queue. Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}
```
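
Stripped of the barrier and wake-up logic, the insertion above is just an ordered insert into a singly linked list keyed by when. A simplified, hypothetical model (Node is an invented stand-in for Message):

```java
// Hypothetical, simplified model of the sorted insert in enqueueMessage().
class Node {
    long when;
    Node next;
    Node(long when) { this.when = when; }
}

static Node insertSorted(Node head, Node msg) {
    if (head == null || msg.when < head.when) {
        msg.next = head;          // new head: this message is due the earliest
        return msg;
    }
    Node prev = head;
    while (prev.next != null && prev.next.when <= msg.when) {
        prev = prev.next;         // walk until the first message due later than ours
    }
    msg.next = prev.next;         // splice the new message in before it
    prev.next = msg;
    return head;
}
```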

Here the Handler puts Messages into the MessageQueue, while Looper's loop() method keeps consuming Messages from it. The source code of the loop method in the Looper class is as follows:

Looper class:

```java
public static void loop() {
    final MessageQueue queue = me.mQueue;
    for (;;) {
        // .................
        Message msg = queue.next();
        // .................
        try {
            msg.target.dispatchMessage(msg);
            dispatchEnd = needEndTime ? SystemClock.uptimeMillis() : 0;
        } finally {
            if (traceTag != 0) {
                Trace.traceEnd(traceTag);
            }
        }
        // .................
    }
}
```
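
Inside loop(), msg.target.dispatchMessage(msg) is where the message is handed back to its Handler. For reference, the dispatch order is: a Message callback (from post) wins, then the Handler's Callback, then the overridden handleMessage. A simplified sketch of that order (paraphrased, not the verbatim framework source):

```java
public void dispatchMessage(Message msg) {
    if (msg.callback != null) {
        // 1. A Runnable delivered via post(): run msg.callback
        handleCallback(msg);
    } else if (mCallback != null && mCallback.handleMessage(msg)) {
        // 2. The Handler.Callback passed to the constructor consumed it
        return;
    } else {
        // 3. Fall back to the Handler subclass's handleMessage()
        handleMessage(msg);
    }
}
```

The queue.next() call inside loop() is what actually pulls messages out of the queue; its source in the MessageQueue class follows.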

MessageQueue class:

```java
Message next() {
    // Return here if the message loop has already quit and been disposed.
    // This can happen if the application tries to restart a looper after quit
    // which is not supported.
    final long ptr = mPtr;
    if (ptr == 0) {
        return null;
    }

    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    int nextPollTimeoutMillis = 0;
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        nativePollOnce(ptr, nextPollTimeoutMillis);

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());
            }
            if (msg != null) {
                if (now < msg.when) {
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                    msg.markInUse();
                    return msg;
                }
            } else {
                // No more messages.
                nextPollTimeoutMillis = -1;
            }

            // Process the quit message now that all pending messages have been handled.
            if (mQuitting) {
                dispose();
                return null;
            }

            // If first time idle, then get the number of idlers to run.
            // Idle handles only run if the queue is empty or if the first message
            // in the queue (possibly a barrier) is due to be handled in the future.
            if (pendingIdleHandlerCount < 0
                    && (mMessages == null || now < mMessages.when)) {
                pendingIdleHandlerCount = mIdleHandlers.size();
            }
            if (pendingIdleHandlerCount <= 0) {
                // No idle handlers to run.  Loop and wait some more.
                mBlocked = true;
                continue;
            }

            if (mPendingIdleHandlers == null) {
                mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
            }
            mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
        }

        // Run the idle handlers.
        // We only ever reach this code block during the first iteration.
        for (int i = 0; i < pendingIdleHandlerCount; i++) {
            final IdleHandler idler = mPendingIdleHandlers[i];
            mPendingIdleHandlers[i] = null; // release the reference to the handler

            boolean keep = false;
            try {
                keep = idler.queueIdle();
            } catch (Throwable t) {
                Log.wtf(TAG, "IdleHandler threw exception", t);
            }

            if (!keep) {
                synchronized (this) {
                    mIdleHandlers.remove(idler);
                }
            }
        }

        // Reset the idle handler count to 0 so we do not run them again.
        pendingIdleHandlerCount = 0;

        // While calling an idle handler, a new message could have been delivered
        // so go back and look again for a pending message without waiting.
        nextPollTimeoutMillis = 0;
    }
}
```

It can be seen from the source above that the loop keeps asking the MessageQueue whether there is a message that can be processed. The test is whether the current system time is greater than or equal to the Message's when value. If it is, the message is removed from the queue and returned; if not, the thread blocks in nativePollOnce, with nextPollTimeoutMillis as the blocking time. When loop() gets a message to process, it hands it to the Handler via dispatchMessage. As for where the Looper itself gets stored: that set operation happens in Looper's prepare method, which instantiates a Looper and stores the object in a ThreadLocal (so prepare does nothing more than store the Looper object for the current thread):

```java
private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) {
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));
}

public static @Nullable Looper myLooper() {
    return sThreadLocal.get();
}
```

A thread can have N Handlers, but only one Looper. The uniqueness of the Looper is guaranteed by ThreadLocal, which is very simple to use:

```java
static final ThreadLocal<T> sThreadLocal = new ThreadLocal<T>();
sThreadLocal.set(...);
sThreadLocal.get();
```

The source code of set and get is as follows:

```java
public void set(T value) {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);
}

public T get() {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null) {
        ThreadLocalMap.Entry e = map.getEntry(this);
        if (e != null) {
            @SuppressWarnings("unchecked")
            T result = (T) e.value;
            return result;
        }
    }
    return setInitialValue();
}
```

From this source code, the implementation of ThreadLocal itself is simple: a key-value map, ThreadLocalMap, is what gives each thread its own copy of a variable. According to set(), the Looper instance ends up stored in a ThreadLocalMap whose key is the ThreadLocal object itself, and the ThreadLocalMap is obtained through the getMap method. The source of getMap is as follows:

ThreadLocal class:

```java
ThreadLocalMap getMap(Thread t) {
    return t.threadLocals;
}

void createMap(Thread t, T firstValue) {
    t.threadLocals = new ThreadLocalMap(this, firstValue);
}
```

Thread class:

```java
public class Thread implements Runnable {
    ThreadLocal.ThreadLocalMap threadLocals = null;
}
```

As the source shows, threadLocals is a field of the Thread itself, so each thread has its own ThreadLocalMap; and since the Looper is fetched through this ThreadLocal, the Looper is unique to a single thread.
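
A quick standalone sketch of that per-thread isolation (plain Java, not framework code):

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<String> sThreadLocal = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        sThreadLocal.set("main-value");       // stored in the main thread's ThreadLocalMap

        Thread worker = new Thread(() -> {
            sThreadLocal.set("worker-value"); // stored in the worker thread's ThreadLocalMap
            System.out.println("worker sees: " + sThreadLocal.get()); // worker-value
        });
        worker.start();
        worker.join();

        System.out.println("main sees: " + sThreadLocal.get()); // main-value
    }
}
```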

The Handlers we usually instantiate are bound to the main thread. How does a child thread instantiate a Handler? The points to note are as follows:

  • Looper.prepare (instantiates a Looper and stores it in ThreadLocal)
  • Looper.loop (starts the message loop)
  • Looper.quit (stops the loop, lets the thread finish, and releases memory)
```java
public class HandlerDemoThread extends Thread {
    private Looper mLooper;

    @Override
    public void run() {
        Looper.prepare();
        synchronized (this) {
            mLooper = Looper.myLooper();
            notifyAll(); // wake up any thread waiting in getLooper()
        }
        Looper.loop();
    }

    public Looper getLooper() throws InterruptedException {
        if (!isAlive()) {
            throw new IllegalStateException("current thread is not alive");
        }
        synchronized (this) {
            // Wait until run() has prepared the Looper.
            while (isAlive() && mLooper == null) {
                wait();
            }
        }
        return mLooper;
    }
}
```

```kotlin
class MainActivity : AppCompatActivity() {

    private lateinit var handlerThread: HandlerDemoThread
    private lateinit var handler1: Handler

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        handlerThread = HandlerDemoThread()
        handlerThread.start()

        handler1 = object : Handler(handlerThread.getLooper()) {
            override fun handleMessage(msg: Message) {
                super.handleMessage(msg)
            }
        }
        handler1.sendMessage(Message.obtain())
    }

    override fun onDestroy() {
        super.onDestroy()
        handler1.looper.quit()
    }
}
```
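
The hand-rolled thread above is essentially what the framework's HandlerThread class already provides: it wraps the prepare()/loop() pattern, and its getLooper() blocks until the Looper is ready. A minimal sketch:

```java
HandlerThread workerThread = new HandlerThread("worker");
workerThread.start();

Handler workerHandler = new Handler(workerThread.getLooper()) {
    @Override
    public void handleMessage(Message msg) {
        // handled on the "worker" thread
    }
};
workerHandler.sendEmptyMessage(1);

// When the worker is no longer needed:
workerThread.quitSafely();
```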

According to the Handler source above, before a Handler is constructed the current thread must already be "prepared", i.e. a Looper has been instantiated and stored in ThreadLocal, and Looper.loop() must be started so the conveyor belt runs. One thing to note: since the send*/post* methods may be called from different threads and enqueueMessage is guarded by a synchronized lock, the delivery time of a delayed message is not perfectly accurate. Also note that when instantiating a Handler on the main thread, there is no need to call prepare and loop yourself, because ActivityThread has already started the main thread's Looper.

ActivityThread is what we often call the main thread or UI thread. The main method of ActivityThread is the entrance of the entire APP. The source code of the main method of ActivityThread is as follows:

```java
public static void main(String[] args) {
    // ...
    Looper.prepareMainLooper();

    ActivityThread thread = new ActivityThread();
    // In the attach method, the Application object is initialized,
    // and then the Application's onCreate() method is called.
    thread.attach(false);

    if (sMainThreadHandler == null) {
        sMainThreadHandler = thread.getHandler();
    }
    // ...
    Looper.loop();

    throw new RuntimeException("Main thread loop unexpectedly exited");
}
```

Why doesn't the main thread call the quit method? Logically it simply cannot: if the main thread's loop were quit, the app would shut down. As for quit itself, it ultimately calls the quit method of MessageQueue, whose source is as follows:

```java
void quit(boolean safe) {
    if (!mQuitAllowed) {
        throw new IllegalStateException("Main thread not allowed to quit.");
    }

    synchronized (this) {
        if (mQuitting) {
            return;
        }
        mQuitting = true;

        if (safe) {
            removeAllFutureMessagesLocked();
        } else {
            removeAllMessagesLocked();
        }

        // We can assume mPtr != 0 because mQuitting was previously false.
        nativeWake(mPtr);
    }
}

private void removeAllMessagesLocked() {
    Message p = mMessages;
    while (p != null) {
        Message n = p.next;
        p.recycleUnchecked();
        p = n;
    }
    mMessages = null;
}

private void removeAllFutureMessagesLocked() {
    final long now = SystemClock.uptimeMillis();
    Message p = mMessages;
    if (p != null) {
        if (p.when > now) {
            removeAllMessagesLocked();
        } else {
            Message n;
            for (;;) {
                n = p.next;
                if (n == null) {
                    return;
                }
                if (n.when > now) {
                    break;
                }
                p = n;
            }
            p.next = null;
            do {
                p = n;
                n = p.next;
                p.recycleUnchecked();
            } while (n != null);
        }
    }
}
```

Message class:

```java
void recycleUnchecked() {
    // Mark the message as in use while it remains in the recycled object pool.
    // Clear out all other details.
    flags = FLAG_IN_USE;
    what = 0;
    arg1 = 0;
    arg2 = 0;
    obj = null;
    replyTo = null;
    sendingUid = -1;
    when = 0;
    target = null;
    callback = null;
    data = null;

    synchronized (sPoolSync) {
        if (sPoolSize < MAX_POOL_SIZE) {
            next = sPool;
            sPool = this;
            sPoolSize++;
        }
    }
}
```

From the source of MessageQueue's quit method we can see that it releases all Messages in the queue and clears their internal fields, i.e. it releases memory. At the same time it sets the flag mQuitting to true and then wakes the thread through nativeWake. Because loop() in Looper keeps calling the next method of the MessageQueue class, the relevant part of next looks like this:

```java
// .................
for (;;) {
    // .................
    nativePollOnce(ptr, nextPollTimeoutMillis);

    synchronized (this) {
        // .................
        if (mQuitting) {
            dispose();
            return null;
        }
        // .................
    }
}
```

Because MessageQueue's quit has woken the thread, next() continues: with mQuitting set to true it calls dispose() and returns null. Once loop() receives null it returns directly, the loop method ends, and the whole thread can terminate. So, coming back to quit: it releases both the memory and the thread.

In a child thread, if a Looper is created manually, the quit method should be called to terminate the message loop once all the work is done; otherwise the child thread stays in a waiting (blocked) state forever. Once its Looper is quit, the thread terminates as soon as the remaining code after loop() finishes, so it is recommended to call quit when the Looper is no longer needed.
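
Looper actually offers two ways to stop: quit() forwards to MessageQueue.quit(false) (removeAllMessagesLocked), while quitSafely() forwards to MessageQueue.quit(true) (removeAllFutureMessagesLocked). A short sketch of the difference:

```java
HandlerThread thread = new HandlerThread("quit-demo");
thread.start();
new Handler(thread.getLooper()).sendEmptyMessageDelayed(1, 5000);

// quit():       MessageQueue.quit(false) — every pending message is recycled immediately.
// quitSafely(): MessageQueue.quit(true)  — messages already due are still delivered,
//               only future (delayed) ones like the one above are dropped.
thread.getLooper().quitSafely();
```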

One thing to note here is that in the recycleUnchecked method, there is such a piece of code:

```java
synchronized (sPoolSync) {
    if (sPoolSize < MAX_POOL_SIZE) {
        next = sPool;
        sPool = this;
        sPoolSize++;
    }
}
```

The logic here: after a Message is removed from the queue and its fields are cleared, it is linked into the message pool, a pool of Message objects kept for reuse. So how should we create Messages for a Handler? To avoid frequently instantiating Message, which causes memory churn, object reuse is applied: call Message.obtain() to get a Message instance. The source of obtain is as follows:

```java
public static Message obtain() {
    synchronized (sPoolSync) {
        if (sPool != null) {
            Message m = sPool;
            sPool = m.next;
            m.next = null;
            m.flags = 0; // clear in-use flag
            sPoolSize--;
            return m;
        }
    }
    return new Message();
}
```
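
In day-to-day code this means preferring the pool over new Message(); for instance (MSG_UPDATE is just a hypothetical what code for illustration):

```java
Handler handler = new Handler(Looper.getMainLooper());
final int MSG_UPDATE = 1; // hypothetical message code

// Pulls a recycled Message from the pool (or allocates one if the pool is empty).
Message msg = handler.obtainMessage(MSG_UPDATE, /* arg1 */ 42, /* arg2 */ 0);
handler.sendMessage(msg);

// Equivalent: Message.obtain(handler, MSG_UPDATE) followed by msg.sendToTarget().
```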

#### Thinking: why doesn't Handler block the main thread?

From the analysis above, when the message at the head of the queue has not yet reached its processing time, the thread running the loop sleeps and does not process that message until the sleep ends. Since the thread sleeps, why doesn't Handler block the main thread?

A thread is just a piece of executable code; when that code finishes, the thread's life cycle ends and the thread exits. For the main thread we never want it to run for a while and then exit on its own, so how do we keep it alive? The simple answer is code that never finishes: an endless loop guarantees the thread does not exit (the Binder threads also use an endless loop, performing read/write operations with the Binder driver in their own loop). Of course it is not a naive busy loop: it sleeps when there are no messages. This raises another question: if it is an endless loop, how are other tasks handled? By creating new threads. What really freezes the main thread is callback work such as onCreate/onStart/onResume taking too long, which causes dropped frames or even ANR; Looper.loop() itself does not freeze the application.

One more question: does the main thread's endless loop consume CPU resources the whole time it runs?

Of course not. When the MessageQueue has no messages, the loop blocks in nativePollOnce() inside queue.next(); the main thread then releases the CPU and goes to sleep until the next message arrives, or some transaction writes data to the write end of a pipe to wake it up. The mechanism used here is epoll, an I/O multiplexing mechanism that can monitor multiple file descriptors at once; when a descriptor becomes ready for reading or writing, the program is immediately notified to perform the read or write. It is still synchronous I/O, i.e. the reads and writes themselves block. So the main thread is asleep most of the time and does not consume much CPU.