OkHttp source code analysis

1.OkHttp introduction

square.github.io/okhttp/

OkHttp is a network request framework frequently used on Android, open sourced by Square. After Android 4.4, Google began to replace the underlying implementation of HttpURLConnection in the platform source code with OkHttp. At the same time, the popular Retrofit framework also uses OkHttp underneath.

Advantages

  • Supports HTTP/1, HTTP/2, QUIC and WebSocket
  • The connection pool reuses underlying TCP connections (sockets) to reduce request latency
  • Seamless GZIP support to reduce data traffic
  • Caches response data to avoid repeated network requests
  • When a request fails, automatically retries other IPs of the host, and automatically follows redirects

Instructions

2.OkHttp request process

Call flow

When using OkHttp to initiate a request, at least three roles are involved for the user: OkHttpClient, Request and Call. OkHttpClient and Request are created with the Builders they provide (the builder pattern). A Call is the ready-to-execute request returned after the Request is handed to the OkHttpClient.

Builder pattern: separate the construction of a complex object from its representation, so that the same construction process can create different representations. When instantiating OkHttpClient and Request there are many properties to set, and developers' combinations of requirements vary endlessly; the builder pattern lets users ignore the internal details of the class, and after configuration the builder initializes the represented object step by step.

At the same time, OkHttp's design uses the facade pattern: a single client, OkHttpClient, hides the complexity of the whole system and exposes the subsystem's interface.

OkHttpClient is full of configuration, such as proxy settings, SSL certificate settings, etc. Call itself is an interface; the implementation we get is RealCall:

```java
static RealCall newRealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
  // Safely publish the Call instance to the EventListener.
  RealCall call = new RealCall(client, originalRequest, forWebSocket);
  call.eventListener = client.eventListenerFactory().create(call);
  return call;
}
```

On the obtained Call, execute represents a synchronous request and enqueue an asynchronous request. The main difference between the two is that execute issues the network request directly on the calling thread, while enqueue runs it on OkHttp's built-in thread pool. This is where OkHttp's task dispatcher comes in.

  • Dispatcher: internally maintains the request queues and a thread pool, and schedules the requests;
  • Interceptors: five default interceptors complete the entire request process.

Dispatcher

The Dispatcher schedules the request tasks and contains a thread pool internally. When creating the OkHttpClient, you can also pass in a thread pool of your own to build the dispatcher.

The members in this Dispatcher are:

```java
// Maximum number of asynchronous requests executing at the same time
private int maxRequests = 64;
// Maximum number of concurrent asynchronous requests to the same host
private int maxRequestsPerHost = 5;
// Idle callback (run when there are no requests; set by the user)
private @Nullable Runnable idleCallback;
// Thread pool used by asynchronous requests
private @Nullable ExecutorService executorService;
// Asynchronous requests waiting to be executed
private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();
// Asynchronous requests currently executing
private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();
// Synchronous requests currently executing
private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();
```

Sync request

```java
synchronized void executed(RealCall call) {
  runningSyncCalls.add(call);
}
```

Asynchronous request

```java
synchronized void enqueue(AsyncCall call) {
  // 1. Fewer than 64 requests currently executing
  // 2. No more than 5 requests to the same host
  if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
    runningAsyncCalls.add(call);      // record as executing
    executorService().execute(call);  // run the task on the thread pool
  } else {
    readyAsyncCalls.add(call);
  }
}
```

When the number of executing tasks has not reached the limit of 64, and at the same time runningCallsForHost(call) < maxRequestsPerHost (no more than 5 requests to the same host), the call is added to the executing queue and submitted to the thread pool; otherwise it joins the waiting queue first. A call submitted directly to the thread pool simply runs, but a call in the waiting queue has to wait for a free slot before it can start. That is why, every time a request finishes executing, the dispatcher's finished method is called:

```java
// Called for asynchronous requests
void finished(AsyncCall call) {
  finished(runningAsyncCalls, call, true);
}

// Called for synchronous requests
void finished(RealCall call) {
  finished(runningSyncCalls, call, false);
}

private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
  int runningCallsCount;
  Runnable idleCallback;
  synchronized (this) {
    // Whether asynchronous or synchronous, the call must be removed from its
    // queue (runningAsyncCalls/runningSyncCalls) once it has finished
    if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
    if (promoteCalls) promoteCalls();
    // Total number of asynchronous and synchronous tasks still executing
    runningCallsCount = runningCallsCount();
    idleCallback = this.idleCallback;
  }
  // If no task is executing, run the idle callback
  if (runningCallsCount == 0 && idleCallback != null) {
    idleCallback.run();
  }
}
```

Note that only asynchronous tasks are subject to the limits and the waiting queue, so only after an asynchronous task finishes does the dispatcher call promoteCalls(). As you would expect, this method redeploys the waiting requests.

```java
/**
 * Move tasks from the ready queue into the running queue and execute them.
 */
private void promoteCalls() {
  // Already running the maximum number of calls; return directly
  if (runningAsyncCalls.size() >= maxRequests) return;
  // No tasks waiting to be executed; return
  if (readyAsyncCalls.isEmpty()) return;
  // Walk the waiting queue
  for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
    AsyncCall call = i.next();
    // At most 5 concurrent requests to the same host
    if (runningCallsForHost(call) < maxRequestsPerHost) {
      i.remove();
      runningAsyncCalls.add(call);
      executorService().execute(call);
    }
    if (runningAsyncCalls.size() >= maxRequests) return; // Reached max capacity.
  }
}
```
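The interplay of enqueue, finished and promoteCalls can be modeled with plain collections. The sketch below is a simplified, single-threaded model of the dispatcher's bookkeeping; the class and its one-String-per-host representation are made up for illustration, and the real Dispatcher adds a thread pool and synchronization:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Simplified, illustrative model of OkHttp's Dispatcher bookkeeping.
// A "call" is represented by its host name only.
class DispatcherSketch {
    static final int MAX_REQUESTS = 64;
    static final int MAX_REQUESTS_PER_HOST = 5;

    final Deque<String> ready = new ArrayDeque<>();   // calls waiting to execute
    final Deque<String> running = new ArrayDeque<>(); // calls currently executing

    long runningCallsForHost(String host) {
        return running.stream().filter(h -> h.equals(host)).count();
    }

    void enqueue(String host) {
        if (running.size() < MAX_REQUESTS && runningCallsForHost(host) < MAX_REQUESTS_PER_HOST) {
            running.add(host); // the real code also submits the call to the thread pool here
        } else {
            ready.add(host);
        }
    }

    void finished(String host) {
        if (!running.remove(host)) throw new AssertionError("Call wasn't in-flight!");
        promoteCalls();
    }

    void promoteCalls() {
        if (running.size() >= MAX_REQUESTS) return;
        if (ready.isEmpty()) return;
        for (Iterator<String> i = ready.iterator(); i.hasNext(); ) {
            String host = i.next();
            if (runningCallsForHost(host) < MAX_REQUESTS_PER_HOST) {
                i.remove();
                running.add(host);
            }
            if (running.size() >= MAX_REQUESTS) return;
        }
    }
}
```

With this model, submitting six calls to one host puts five into running and one into ready; finishing one of them promotes the waiting call, exactly the behavior described above.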

3. Highly concurrent request dispatcher and thread pool

Request flow

Users never need to operate the task dispatcher directly; the execute and enqueue methods provided on the obtained RealCall start a synchronous or asynchronous request.

```java
@Override
public Response execute() throws IOException {
  synchronized (this) {
    if (executed) throw new IllegalStateException("Already Executed");
    executed = true;
  }
  captureCallStackTrace();
  eventListener.callStart(this);
  try {
    // Notify the dispatcher
    client.dispatcher().executed(this);
    // Initiate the request
    Response result = getResponseWithInterceptorChain();
    if (result == null) throw new IOException("Canceled");
    return result;
  } catch (IOException e) {
    eventListener.callFailed(this, e);
    throw e;
  } finally {
    // The request has completed
    client.dispatcher().finished(this);
  }
}
```

The asynchronous request likewise ends up calling getResponseWithInterceptorChain() to execute the request:

```java
@Override
public void enqueue(Callback responseCallback) {
  synchronized (this) {
    if (executed) throw new IllegalStateException("Already Executed");
    executed = true;
  }
  captureCallStackTrace();
  eventListener.callStart(this);
  // Hand the call to the dispatcher
  client.dispatcher().enqueue(new AsyncCall(responseCallback));
}
```

If the RealCall has already been executed, executing it again is not allowed. An asynchronous request submits an AsyncCall to the dispatcher. AsyncCall is a subclass of Runnable; when a thread runs it, its run method executes, which is redirected to the execute method in AsyncCall:

```java
final class AsyncCall extends NamedRunnable {
  private final Callback responseCallback;

  AsyncCall(Callback responseCallback) {
    super("OkHttp %s", redactedUrl());
    this.responseCallback = responseCallback;
  }

  // Executed on the thread pool
  @Override
  protected void execute() {
    boolean signalledCallback = false;
    try {
      Response response = getResponseWithInterceptorChain();
      // ...
    } catch (IOException e) {
      // ...
    } finally {
      // The request has completed
      client.dispatcher().finished(this);
    }
  }
}

public abstract class NamedRunnable implements Runnable {
  protected final String name;

  public NamedRunnable(String format, Object... args) {
    this.name = Util.format(format, args);
  }

  @Override
  public final void run() {
    String oldName = Thread.currentThread().getName();
    Thread.currentThread().setName(name);
    try {
      execute();
    } finally {
      Thread.currentThread().setName(oldName);
    }
  }

  protected abstract void execute();
}
```

At the same time, AsyncCall is a non-static inner class of RealCall, which means it holds a reference to the outer RealCall and can directly call the outer class's methods. As you can see, whether the request is synchronous or asynchronous, the actual work of executing it happens in getResponseWithInterceptorChain(). This method is the core of OkHttp: the interceptor chain of responsibility. But before introducing the chain of responsibility, let's review the basics of thread pools.

Dispatcher thread pool

As mentioned earlier, the dispatcher schedules the request tasks and contains a thread pool internally. When an asynchronous request is made, the task is handed to that thread pool for execution. How is the dispatcher's default thread pool defined, and why is it defined that way?

```java
public synchronized ExecutorService executorService() {
  if (executorService == null) {
    executorService = new ThreadPoolExecutor(
        0,                                 // core pool size
        Integer.MAX_VALUE,                 // maximum pool size
        60,                                // idle thread keep-alive time
        TimeUnit.SECONDS,                  // keep-alive time unit
        new SynchronousQueue<Runnable>(),  // work queue
        Util.threadFactory("OkHttp Dispatcher", false) // thread factory
    );
  }
  return executorService;
}
```

The dispatcher's thread pool is defined as above, which is effectively the same pool created by Executors.newCachedThreadPool(). First, the core pool size is 0, meaning the pool does not keep threads alive for us: any thread idle for 60 seconds is reclaimed. The combination of a maximum pool size of Integer.MAX_VALUE and a SynchronousQueue work queue yields maximum throughput: when the pool is asked to run a task and no thread is idle, there is no waiting; a new thread is created immediately to execute the task. The work queue determines the pool's queueing behavior. The common BlockingQueue implementations are ArrayBlockingQueue, LinkedBlockingQueue and SynchronousQueue.

Assuming that when submitting tasks to the thread pool, the core threads are all occupied:

  • ArrayBlockingQueue : an array-based blocking queue; a fixed size must be specified at initialization.

    With this queue, tasks submitted to the thread pool are first added to the waiting queue. Once the waiting queue is full, the next submission fails to enqueue, and the pool then checks whether the current thread count has reached the maximum pool size; if not, a new thread is created to execute the newly submitted task. A later-submitted task may therefore run first, while an earlier one is still waiting.

  • LinkedBlockingQueue : a blocking queue implemented with a linked list; a size may or may not be specified at initialization.

    When a size is specified, the behavior matches ArrayBlockingQueue. When no size is specified, the default of Integer.MAX_VALUE is used as the queue size, and the maximum pool size parameter becomes effectively useless: submitting a task to the queue always succeeds, so all tasks end up executing on the core threads, and if those are all busy, tasks simply keep waiting.

  • SynchronousQueue : a queue with no capacity. Using this queue means you want maximum concurrency: offering a task to the queue fails unless a thread is already waiting to take it, and after the failure, if there is no idle thread and the pool has not reached its maximum size, a new thread is created to execute the task. There is no waiting at all; the only limit is the maximum pool size. It is therefore usually paired with Integer.MAX_VALUE to achieve truly wait-free execution.

Note, however, that a process's memory is limited and each thread needs some of it, so the number of threads cannot grow without bound. Although the maximum pool size is set to Integer.MAX_VALUE, OkHttp also caps the number of executing asynchronous request tasks at 64. That bound solves the problem while still providing maximum throughput.
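The no-wait behavior is easy to observe with a small, self-contained demo (illustrative code, not OkHttp's): every task submitted to a SynchronousQueue-backed pool with maximum size Integer.MAX_VALUE gets its own thread immediately, so the pool grows to match the number of concurrently blocked tasks:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CachedPoolDemo {
    // Submits `tasks` blocking tasks and reports the pool size once all are running.
    public static int poolSizeAfterSubmitting(int tasks) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        CountDownLatch started = new CountDownLatch(tasks);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        try {
            started.await();                 // all tasks are now running concurrently
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        int size = pool.getPoolSize();       // one thread was created per blocked task
        release.countDown();
        pool.shutdown();
        return size;
    }
}
```

Because the SynchronousQueue never accepts a task while no thread is waiting on it, each of the five submissions forces a new thread, so the pool size is exactly five while they block.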

Dispatcher summary

For synchronous requests, the dispatcher merely records the call so it can decide whether the idleCallback needs to run. For asynchronous requests submitted to the dispatcher:

  • Q1: How does it decide whether to put a request into ready or running?

    A: If 64 or more asynchronous requests are already executing, or 5 requests to the same host are already executing, the call goes into ready; otherwise it goes into running.

  • Q2: What are the conditions for moving from ready to running?

    A: When a request finishes executing, it is removed from running, and promoteCalls applies the same conditions as in Q1 to decide which waiting calls to move.

  • Q3: What is the working behavior of the distributor thread pool?

    A: No waiting, maximum concurrency

4. The request chain of responsibility

The core of OkHttp's work is getResponseWithInterceptorChain(). Before analyzing this method, let's first understand the chain of responsibility pattern, because the method uses it to carry the request through its steps. A chain of responsibility is, as the name implies, a chain of handlers, similar to a factory assembly line.

Chain of Responsibility Model

A chain of receiver objects is created for the request. This pattern decouples the sender and receiver of the request based on the type of request. Usually each receiver holds a reference to the next receiver; if an object cannot handle the request, it passes the same request on to the next receiver, and so on. For example:

Qixi Festival has just passed. When classmate Zhou Zhou was at school, he was single. Every night in the study room he would sit in the back row, find a slip of paper and write: "Hi, can you be my girlfriend? If not, please pass this forward." The note was passed forward row by row until it reached the cleaning auntie, and afterwards the two lived a happy life. A truly... happy story. So what does the whole process look like?

```java
public class Test1 {
    // The sender
    static abstract class Transmit {
        // The next sender in the chain of responsibility
        protected Transmit nextTransmit;

        public abstract boolean request(String msg);

        public void setNextTransmit(Transmit transmit) {
            nextTransmit = transmit;
        }
    }

    static class Zero extends Transmit {
        public boolean request(String msg) {
            System.out.println("Zero received the note and smiled knowingly");
            return nextTransmit.request(msg);
        }
    }

    static class Alvin extends Transmit {
        public boolean request(String msg) {
            System.out.println("Alvin received the note, heartbroken");
            return nextTransmit.request(msg);
        }
    }

    static class Lucy extends Transmit {
        public boolean request(String msg) {
            System.out.println("Auntie Lucy Wang Cuihua received the note, delighted");
            return true;
        }
    }

    private static Transmit getTransmits() {
        Transmit zero = new Zero();
        Transmit alvin = new Alvin();
        Transmit lucy = new Lucy();
        zero.setNextTransmit(alvin);
        alvin.setNextTransmit(lucy);
        return zero;
    }

    public static void main(String[] args) {
        Transmit transmit = getTransmits();
        transmit.request("Hi, can you be my girlfriend?");
    }
}
```

In the chain of responsibility pattern, each object's reference to the next forms a chain. The request travels along this chain until some object on it decides to handle the request. The client does not know which object on the chain will eventually handle the request, and the chain can be reorganized and responsibilities reassigned dynamically without affecting the client. A handler has two choices: take responsibility or pass it to the next handler. A request may end up not being accepted by any receiver at all.

Interceptor process

When the request needs to be executed, the result of the request, a Response, is obtained through getResponseWithInterceptorChain():

```java
Response getResponseWithInterceptorChain() throws IOException {
  // Build a full stack of interceptors.
  List<Interceptor> interceptors = new ArrayList<>();
  interceptors.addAll(client.interceptors());
  interceptors.add(retryAndFollowUpInterceptor);
  interceptors.add(new BridgeInterceptor(client.cookieJar()));
  interceptors.add(new CacheInterceptor(client.internalCache()));
  interceptors.add(new ConnectInterceptor(client));
  if (!forWebSocket) {
    interceptors.addAll(client.networkInterceptors());
  }
  interceptors.add(new CallServerInterceptor(forWebSocket));

  Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
      originalRequest, this, eventListener, client.connectTimeoutMillis(),
      client.readTimeoutMillis(), client.writeTimeoutMillis());

  return chain.proceed(originalRequest);
}
```

The process experienced is:

The request will be handed over to the interceptors in the chain of responsibility. There are five major interceptors by default:

  1. RetryAndFollowUpInterceptor: the retry interceptor. Before handing the request on (to the next interceptor), it checks whether the user has cancelled the request; after obtaining the result, it decides from the response code whether a retry or redirect is needed, and if the conditions are met, all interceptors run again for the new request.
  2. BridgeInterceptor: the bridge interceptor. Before handing on, it adds the request headers the HTTP protocol requires (such as Host) and some default behaviors (such as GZIP compression); after obtaining the result, it calls the cookie-saving interface and decodes GZIP data.
  3. CacheInterceptor: the cache interceptor. As the name implies, before handing on it reads the cache and decides whether to use it; after obtaining the result, it decides whether to cache the response.
  4. ConnectInterceptor: the connect interceptor. Before handing on, it finds or creates a connection and obtains the corresponding socket streams; it does no extra processing after the result is obtained.
  5. CallServerInterceptor: the call-server interceptor, which communicates with the server, sends the request data, and parses the response data that is read back.
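The RealInterceptorChain mechanism, where each interceptor calls chain.proceed(...) to invoke the rest of the chain, can be sketched in a few lines of plain Java. The types here are simplified stand-ins, not the real okhttp3 interfaces:

```java
import java.util.Arrays;
import java.util.List;

// A minimal index-based interceptor chain in the style of RealInterceptorChain.
public class MiniChain {
    interface Interceptor { String intercept(Chain chain); }

    static class Chain {
        private final List<Interceptor> interceptors;
        private final int index;      // which interceptor runs next
        private final String request; // stand-in for okhttp3.Request

        Chain(List<Interceptor> interceptors, int index, String request) {
            this.interceptors = interceptors;
            this.index = index;
            this.request = request;
        }

        String request() { return request; }

        // Hands the (possibly rewritten) request to the next interceptor.
        String proceed(String request) {
            Chain next = new Chain(interceptors, index + 1, request);
            return interceptors.get(index).intercept(next);
        }
    }

    public static String run(String request) {
        List<Interceptor> interceptors = Arrays.asList(
            chain -> "retry(" + chain.proceed(chain.request()) + ")", // work before and after
            chain -> chain.proceed(chain.request() + "+headers"),     // rewrite the request
            chain -> "response for " + chain.request()                // the last one answers
        );
        return new Chain(interceptors, 0, request).proceed(request);
    }
}
```

Each interceptor can do work before delegating, rewrite the request, or short-circuit by answering itself, which is exactly how the five default interceptors divide the request process.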

5. Interceptors

Redirect and retry interceptor

The first interceptor, RetryAndFollowUpInterceptor, mainly does two things: retries and redirects.

Retry

If a RouteException or IOException occurs during the request phase, it determines whether to re-initiate the request.

RouteException

```java
catch (RouteException e) {
  // Routing exception: the connection failed and the request was never sent
  if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
    throw e.getLastConnectException();
  }
  releaseConnection = false;
  continue;
}
```

IOException

```java
catch (IOException e) {
  // The request was sent, but communication with the server failed
  // (the socket stream broke while reading or writing data).
  // HTTP/2 throws ConnectionShutdownException, so for HTTP/1 requestSendStarted is always true.
  boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
  if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
  releaseConnection = false;
  continue;
}
```

Both exceptions rely on the recover method to decide whether a retry can be performed; a return value of true means retrying is allowed.

```java
private boolean recover(IOException e, StreamAllocation streamAllocation,
                        boolean requestSendStarted, Request userRequest) {
  streamAllocation.streamFailed(e);

  // 1. Retries disabled in the OkHttpClient configuration (allowed by default):
  //    once a request fails, never retry
  if (!client.retryOnConnectionFailure()) return false;

  // 2. For a RouteException this condition is irrelevant (requestSendStarted is false);
  //    for an IOException, requestSendStarted can be false only for HTTP/2 I/O
  //    exceptions, so the second condition is what matters
  if (requestSendStarted && userRequest.body() instanceof UnrepeatableRequestBody) return false;

  // 3. Is this an exception we can retry after?
  if (!isRecoverable(e, requestSendStarted)) return false;

  // 4. Are there more routes to try?
  if (!streamAllocation.hasMoreRoutes()) return false;

  // For failure recovery, use the same route selector with a new connection.
  return true;
}
```

So, provided retrying is not disabled, when certain exceptions occur and more routes are available, OkHttp tries a different route and retries the request. Which exceptions qualify is decided in isRecoverable:

```java
private boolean isRecoverable(IOException e, boolean requestSendStarted) {
  // A protocol exception occurred: cannot retry
  if (e instanceof ProtocolException) {
    return false;
  }
  // An interruption is only retryable if it is a socket connect timeout
  // and the request was never sent
  if (e instanceof InterruptedIOException) {
    return e instanceof SocketTimeoutException && !requestSendStarted;
  }
  // SSL handshake exception caused by a certificate problem: cannot retry
  if (e instanceof SSLHandshakeException) {
    if (e.getCause() instanceof CertificateException) {
      return false;
    }
  }
  // SSL peer unverified exception: cannot retry
  if (e instanceof SSLPeerUnverifiedException) {
    return false;
  }
  return true;
}
```
  1. Protocol exception : not retryable. The request or the server's response itself is malformed with respect to the HTTP protocol, so retrying is pointless.
  2. Timeout exception : a socket connect timeout may be caused by network fluctuation, so a different route can be tried.
  3. SSL certificate exception / SSL verification failure exception : the former means certificate verification failed; the latter may mean there is no certificate at all or the certificate data is wrong. Neither can be fixed by retrying.

After the exception check, if retrying is still allowed, it also checks whether another route is available to connect to. In simple terms: DNS may return multiple IPs when resolving a domain name; after one IP fails, another IP is tried.

Redirect

If no exception occurs after the request finishes, that does not mean the response obtained can be handed to the user yet; it must still be decided whether a redirect is needed. That judgment is made in the followUpRequest method:

```java
private Request followUpRequest(Response userResponse) throws IOException {
  if (userResponse == null) throw new IllegalStateException();
  Connection connection = streamAllocation.connection();
  Route route = connection != null ? connection.route() : null;
  int responseCode = userResponse.code();
  final String method = userResponse.request().method();
  switch (responseCode) {
    // 407: the client is using an HTTP proxy server; add "Proxy-Authorization"
    // to the request header so the proxy server can authorize it
    case HTTP_PROXY_AUTH:
      Proxy selectedProxy = route != null ? route.proxy() : client.proxy();
      if (selectedProxy.type() != Proxy.Type.HTTP) {
        throw new ProtocolException("Received HTTP_PROXY_AUTH (407) code while not using proxy");
      }
      return client.proxyAuthenticator().authenticate(route, userResponse);

    // 401: authentication required. Some server interfaces verify the user's
    // identity; add "Authorization" to the request header
    case HTTP_UNAUTHORIZED:
      return client.authenticator().authenticate(route, userResponse);

    // 308 permanent redirect
    // 307 temporary redirect
    case HTTP_PERM_REDIRECT:
    case HTTP_TEMP_REDIRECT:
      // For 307/308 the framework only auto-redirects GET and HEAD requests
      if (!method.equals("GET") && !method.equals("HEAD")) {
        return null;
      }
      // fall through
    // 300 301 302 303
    case HTTP_MULT_CHOICE:
    case HTTP_MOVED_PERM:
    case HTTP_MOVED_TEMP:
    case HTTP_SEE_OTHER:
      // If the user does not allow redirects, return null
      if (!client.followRedirects()) return null;

      // Take the Location from the response header
      String location = userResponse.header("Location");
      if (location == null) return null;
      // Build the new request url from the location
      HttpUrl url = userResponse.request().url().resolve(location);
      // A null url means the protocol is wrong and no HttpUrl could be built;
      // return null and do not redirect
      if (url == null) return null;

      // If the redirect switches between http and https, check whether
      // the user allows it (allowed by default)
      boolean sameScheme = url.scheme().equals(userResponse.request().url().scheme());
      if (!sameScheme && !client.followSslRedirects()) return null;

      Request.Builder requestBuilder = userResponse.request().newBuilder();
      /**
       * Unless the redirected request is a PROPFIND, any method with a body
       * (POST or otherwise) must be changed to a GET request.
       * That is, only PROPFIND redirects keep their request body.
       */
      // The request is not GET or HEAD
      if (HttpMethod.permitsRequestBody(method)) {
        final boolean maintainBody = HttpMethod.redirectsWithBody(method);
        // Change to a GET request, except for PROPFIND
        if (HttpMethod.redirectsToGet(method)) {
          requestBuilder.method("GET", null);
        } else {
          RequestBody requestBody = maintainBody ? userResponse.request().body() : null;
          requestBuilder.method(method, requestBody);
        }
        // Not a PROPFIND request: drop the body-related request headers
        if (!maintainBody) {
          requestBuilder.removeHeader("Transfer-Encoding");
          requestBuilder.removeHeader("Content-Length");
          requestBuilder.removeHeader("Content-Type");
        }
      }

      // When redirecting across hosts, drop the authentication request header
      if (!sameConnection(userResponse, url)) {
        requestBuilder.removeHeader("Authorization");
      }

      return requestBuilder.url(url).build();

    // 408: client request timeout
    case HTTP_CLIENT_TIMEOUT:
      // 408 is treated as a connection failure, so check whether the user allows retries
      if (!client.retryOnConnectionFailure()) {
        return null;
      }
      // UnrepeatableRequestBody is not actually used anywhere else
      if (userResponse.request().body() instanceof UnrepeatableRequestBody) {
        return null;
      }
      // If this response is itself the product of a re-request that was caused
      // by a 408, do not re-request again
      if (userResponse.priorResponse() != null
          && userResponse.priorResponse().code() == HTTP_CLIENT_TIMEOUT) {
        return null;
      }
      // If the server sent Retry-After telling us how long to wait, the framework gives up
      if (retryAfter(userResponse, 0) > 0) {
        return null;
      }
      return userResponse.request();

    // 503 service unavailable: similar to 408, but the request is only re-sent
    // when the server sends Retry-After: 0 (meaning retry immediately)
    case HTTP_UNAVAILABLE:
      if (userResponse.priorResponse() != null
          && userResponse.priorResponse().code() == HTTP_UNAVAILABLE) {
        return null;
      }
      if (retryAfter(userResponse, Integer.MAX_VALUE) == 0) {
        return userResponse.request();
      }
      return null;

    default:
      return null;
  }
}
```

There are many conditions around redirecting, and it is fine not to remember them all; the key is to understand what they mean. If this method returns null, no redirect is needed and the response is returned directly. If it returns a non-null Request, that Request must be issued again; note, though, that the interceptor caps the number of follow-ups at 20.

summary

This interceptor is the first in the entire chain, which means it is the first to touch the Request and the last to receive the Response. Its main job is to decide whether a retry or a redirect is needed. The precondition for a retry is a RouteException or IOException: whenever one of these is thrown by the interceptors further down the chain, the recover method decides whether the connection may be retried. Redirection is considered after the retry determination: based on the Response's response code, followUpRequest may build a follow-up Request (of course, if the request failed outright there is no Response and the exception propagates). Follow-ups are limited to 20.
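The retry/redirect loop can be sketched as follows. The types and the redirect condition are simplified stand-ins (the real interceptor inspects a full Response via followUpRequest), but the follow-up counting against a cap of 20 mirrors the real code:

```java
// Simplified model of RetryAndFollowUpInterceptor's follow-up loop.
public class FollowUpSketch {
    static final int MAX_FOLLOW_UPS = 20;

    interface Server { int respond(String url); }   // returns a response code

    // Follows redirects until a non-redirect response, or fails at the cap.
    public static int fetch(Server server, String url) {
        int followUpCount = 0;
        while (true) {
            int code = server.respond(url);
            if (code != 301 && code != 302) {
                return code;                        // done: hand the response up
            }
            if (++followUpCount > MAX_FOLLOW_UPS) {
                throw new IllegalStateException("Too many follow-up requests: " + followUpCount);
            }
            url = url + "/next";                    // stand-in for the Location header
        }
    }
}
```

Two redirects followed by a 200 succeed; a server that redirects forever trips the cap, just as OkHttp throws after too many follow-ups.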

Bridge interceptor

BridgeInterceptor is the bridge between the application and the server. The request we send is processed by it before it goes out, for example setting the request body's content length, encoding, gzip compression and cookies, and it saves cookies after the response is obtained. This interceptor is relatively simple.

It completes the request headers:

  • Content-Type: the request body type, such as application/x-www-form-urlencoded
  • Content-Length / Transfer-Encoding: how the request body is delimited
  • Host: the host of the requested site
  • Connection: Keep-Alive: keep the connection alive for reuse
  • Accept-Encoding: gzip: accept gzip-compressed responses
  • Cookie: the cookie identification
  • User-Agent: client information, such as operating system and browser

After completing the request header, hand it over to the next interceptor for processing. After getting the response, there are two main things to do:

  1. Save the cookies. On the next request, the stored data will be read and set into the request header. The default CookieJar provides no implementation.
  2. If the returned data is gzip-compressed, wrap it with a GzipSource so it can be parsed transparently.

To summarize the bridge interceptor's logic: it adds or removes the relevant headers on the Request the user built, turning it into a Request that can actually be sent over the network; it submits that spec-conforming Request to the next interceptor and receives the Response; if the response body is GZIP-compressed it decompresses it; and finally it constructs a Response usable by the caller and returns it.
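The GZIP part can be illustrated with the JDK's own streams. OkHttp itself uses Okio's GzipSource, but the round trip is the same idea: the request advertises Accept-Encoding: gzip, the server compresses the body, and the interceptor transparently decompresses it. This sketch uses java.util.zip instead:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {
    // What a server does when the client sends "Accept-Encoding: gzip".
    public static byte[] gzip(String body) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                gz.write(body.getBytes(StandardCharsets.UTF_8));
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // What the bridge interceptor does before handing the response body to the user.
    public static String gunzip(byte[] compressed) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) out.write(buf, 0, n);
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The caller never sees the compressed bytes, which is exactly why OkHttp also strips the Content-Encoding and Content-Length headers after unzipping.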

Cache interceptor

CacheInterceptor: before making a request, it judges whether the cache is hit. On a hit, the request can be skipped and the cached response used directly. (Only responses to GET requests are cached.) The steps are:

  1. Obtain the response cache of the corresponding request from the cache
  2. When creating a CacheStrategy, it will determine whether the cache can be used. There are two members in the CacheStrategy: networkRequest and cacheResponse. Their combination is as follows:
| networkRequest | cacheResponse | Description |
| --- | --- | --- |
| Null | Not Null | Use the cache directly |
| Not Null | Null | Initiate a request to the server |
| Null | Null | No way forward; okhttp directly returns 504 |
| Not Null | Not Null | Initiate a conditional request; if the response is 304 (not modified), update the cached response and return it |
  3. Hand over to the next interceptor in the chain to continue processing
  4. In the follow-up work, if a 304 is returned, the cached response is used; otherwise the network response is used and cached (only responses to GET requests are cached)
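The four networkRequest/cacheResponse combinations reduce to a small decision, sketched here with illustrative method and result names:

```java
// Sketch of CacheStrategy's networkRequest/cacheResponse decision table.
public class CacheDecision {
    static String decide(boolean hasNetworkRequest, boolean hasCacheResponse) {
        if (!hasNetworkRequest && hasCacheResponse) return "use cache";
        if (hasNetworkRequest && !hasCacheResponse) return "request server";
        if (!hasNetworkRequest) return "504";              // neither: fail locally with 504
        return "conditional request";                      // both: expect a possible 304
    }
}
```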

The work of the cache interceptor is relatively simple, but the specific implementation requires a lot of content. In the cache interceptor, it is judged whether the cache can be used or the request server is judged by CacheStrategy.

Caching strategy

CacheStrategy. First, you need to know several request and response headers:

| Response header | Description | Example |
| --- | --- | --- |
| Date | The time the message was sent | Date: Sat, 18 Nov 2028 06:17:41 GMT |
| Expires | Resource expiration time | Expires: Sat, 18 Nov 2028 06:17:41 GMT |
| Last-Modified | The time the resource was last modified | Last-Modified: Fri, 22 Jul 2016 02:57:17 GMT |
| ETag | The unique identifier of the resource on the server | ETag: "16df0-5383097a03d40" |
| Age | How long ago (in seconds) the cached response was created, when the server answers from a cache | Age: 3825683 |
| Cache-Control | Cache control directives (see below) | |

| Request header | Description | Example |
| --- | --- | --- |
| If-Modified-Since | If the server has not modified the requested resource after the specified time, it returns 304 (not modified) | If-Modified-Since: Fri, 22 Jul 2016 02:57:17 GMT |
| If-None-Match | The server compares the value with the resource's ETag and returns 304 if they match | If-None-Match: "16df0-5383097a03d40" |
| Cache-Control | Cache control directives (see below) | |

Among them, Cache-Control can exist in the request header or the response header, and the corresponding value can be set in multiple combinations:

  1. max-age=[seconds]: the maximum time the resource remains valid;
  2. public: the resource may be cached by any party, such as the client, a proxy server, etc.;
  3. private: the resource may only be cached by a single user; this is the default;
  4. no-store: the resource is not allowed to be cached;
  5. no-cache: (request) do not use the cache;
  6. immutable: (response) the resource will not change;
  7. min-fresh=[seconds]: (request) the minimum freshness of the cache (the length of time the user requires the cache to remain valid);
  8. must-revalidate: (response) an expired cache entry is not allowed to be used;
  9. max-stale=[seconds]: (request) how long past expiry the cache may still be used.

Suppose max-age=100 and min-fresh=20. The user then treats the cached response as usable for 100 - 20 = 80s, counted from the time the server created the response. If additionally max-stale=100, then after those 80s of validity the cache may still be used for another 100s, so its effective lifetime can be regarded as 180s.
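The arithmetic above amounts to a single servability check: the cache may be used while age + min-fresh < max-age + max-stale. A minimal sketch with the example values:

```java
// Freshness check behind the max-age / min-fresh / max-stale example above.
public class Freshness {
    // A cached response is servable while: age + minFresh < maxAge + maxStale.
    static boolean isServable(long ageSec, long maxAgeSec, long minFreshSec, long maxStaleSec) {
        return ageSec + minFreshSec < maxAgeSec + maxStaleSec;
    }
}
```

With max-age=100 and min-fresh=20 the cache is servable up to age 80; adding max-stale=100 extends that to age 180, matching the 180s effective lifetime above.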

Connection interceptor

ConnectInterceptor opens the connection to the target server and then executes the next interceptor. It is short enough to be quoted here in its entirety:

public final class ConnectInterceptor implements Interceptor {
  public final OkHttpClient client;

  public ConnectInterceptor(OkHttpClient client) {
    this.client = client;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();

    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();

    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}

Although the amount of code is small, most of the work is encapsulated in other classes; this is just the call site. First, the StreamAllocation object was created in the first interceptor (the retry-and-redirect interceptor), but it is only actually used here. "A request needs a connection, and an established connection needs a stream to read and write data": StreamAllocation coordinates the relationship between the request, the connection, and the data stream. It is responsible for finding a connection for a request and then obtaining a stream to carry out the network communication. The newStream method used here finds, or establishes, a valid connection to the request's host. The returned HttpCodec wraps the input and output streams and encapsulates the encoding and decoding of HTTP messages, so it can be used directly to complete the HTTP exchange with the host. In short, StreamAllocation maintains the connections: RealConnection (which wraps the Socket) and a connection pool. For a RealConnection to be reusable:

public boolean isEligible(Address address, @Nullable Route route) {
  // If this connection is not accepting new streams, we're done.
  // (i.e. it is in use -- for HTTP/1.1 -- or closed, so it cannot be reused)
  if (allocations.size() >= allocationLimit || noNewStreams) return false;

  // If the non-host fields of the address don't overlap, we're done.
  // (a different dns, proxy, certificate, port, etc. means no reuse;
  // the host itself has not been checked yet -- that comes immediately below)
  if (!Internal.instance.equalsNonHost(this.route.address(), address)) return false;

  // If the host exactly matches, we're done: this connection can carry the address.
  if (address.url().host().equals(this.route().address().url().host())) {
    return true; // This connection is a perfect match.
  }

  // At this point we don't have a hostname match. But we can still carry the request
  // if our connection coalescing requirements are met. See also:
  // https://hpbn.co/optimizing-application-delivery/#eliminate-domain-sharding
  // https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/

  // 1. This connection must be HTTP/2.
  if (http2Connection == null) return false;

  // 2. The routes must share an IP address. This requires us to have a DNS address for both
  // hosts, which only happens after route planning. We can't coalesce connections that use a
  // proxy, since proxies don't tell us the origin server's IP address.
  if (route == null) return false;
  if (route.proxy().type() != Proxy.Type.DIRECT) return false;
  if (this.route.proxy().type() != Proxy.Type.DIRECT) return false;
  if (!this.route.socketAddress().equals(route.socketAddress())) return false;

  // 3. This connection's server certificate must cover the new host.
  if (route.address().hostnameVerifier() != OkHostnameVerifier.INSTANCE) return false;
  if (!supportsUrl(address.url())) return false;

  // 4. Certificate pinning must match the host.
  try {
    address.certificatePinner().check(address.url().host(), handshake().peerCertificates());
  } catch (SSLPeerUnverifiedException e) {
    return false;
  }

  return true; // The caller's address can be carried by this connection.
}
  1. The connection has reached its maximum number of concurrent streams, or is not allowed to create new streams; for example, a connection in use by HTTP/1.x cannot be used by anyone else (its maximum concurrent stream count is 1), and a closed connection cannot be reused either;

    if (allocations.size() >= allocationLimit || noNewStreams) return false;
  2. DNS, proxy, SSL certificate, server domain name and port can be reused if they are identical;

    if (!Internal.instance.equalsNonHost(this.route.address(), address)) return false;

    if (address.url().host().equals(this.route().address().url().host())) {
      return true; // This connection is a perfect match.
    }

    If the above conditions are not met, the connection may still be reusable in certain HTTP/2 scenarios (connection coalescing; this does not apply to HTTP/1.x).

So in summary, if you find a connection in the connection pool that has the same connection parameters and is not closed and is not occupied, you can reuse it.
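Boiled down to the HTTP/1.x path, the check reads as follows. The parameters stand in for the fields tested in isEligible above; this is a didactic sketch, not OkHttp code.

```java
// Simplified HTTP/1.x reuse check mirroring the first half of isEligible.
public class ReuseCheck {
    static boolean canReuse(int activeStreams, int allocationLimit, boolean noNewStreams,
                            boolean sameNonHostFields, boolean sameHost) {
        if (activeStreams >= allocationLimit || noNewStreams) return false; // busy or closed
        if (!sameNonHostFields) return false; // dns/proxy/certificate/port must all match
        return sameHost;                      // exact host match => reusable
    }
}
```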

Connection pool process

Connection pool cleaning

Summary: everything implemented in this interceptor serves to obtain a connection to the target server, over which HTTP data is then sent and received.

Request service interceptor

CallServerInterceptor uses HttpCodec to send the request to the server and parses the result into a Response. It first calls httpCodec.writeRequestHeaders(request) to write the request headers into the buffer (they are not actually sent to the server until flushRequest() is called), then immediately makes the first logical judgment:

Response.Builder responseBuilder = null;
if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
  // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
  // Continue" response before transmitting the request body. If we don't get that, return
  // what we did get (such as a 4xx response) without ever transmitting the request body.
  if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
    httpCodec.flushRequest();
    realChain.eventListener().responseHeadersStart(realChain.call());
    responseBuilder = httpCodec.readResponseHeaders(true);
  }

  if (responseBuilder == null) {
    // Write the request body if the "Expect: 100-continue" expectation was met.
    realChain.eventListener().requestBodyStart(realChain.call());
    long contentLength = request.body().contentLength();
    CountingSink requestBodyOut =
        new CountingSink(httpCodec.createRequestBody(request, contentLength));
    BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
    request.body().writeTo(bufferedRequestBody);
    bufferedRequestBody.close();
    realChain.eventListener().requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
  } else if (!connection.isMultiplexed()) {
    // HTTP/2 multiplexing: no need to close the socket in that case.
    // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
    // from being reused. Otherwise we're still obligated to transmit the request body to
    // leave the connection in a consistent state.
    streamAllocation.noNewStreams();
  }
}
httpCodec.finishRequest();

The entire if is tied to one request header: Expect: 100-continue. This header means that, before sending the request body, the client first asks the server whether it is willing to accept it. permitsRequestBody determines whether the request carries a body (e.g. POST). If the if is hit, the server is first asked whether it will accept the request body; only if the server is willing does it respond with 100 (with no response body, so responseBuilder stays null), and only then is the rest of the request data sent.

But if the server does not agree to accept the request body, then we need to mark that the connection can no longer be reused and call noNewStreams() to close the related Socket. The subsequent code is:

if (responseBuilder == null) {
  realChain.eventListener().responseHeadersStart(realChain.call());
  responseBuilder = httpCodec.readResponseHeaders(false);
}

Response response = responseBuilder
    .request(request)
    .handshake(streamAllocation.connection().handshake())
    .sentRequestAtMillis(sentRequestMillis)
    .receivedResponseAtMillis(System.currentTimeMillis())
    .build();

The situation of responseBuilder at this time is:

  1. POST request, the request header contains Expect, the server allows to accept the request body, and the request body has been sent, responseBuilder is null;
  2. POST request, the request header contains Expect, the server refuses to accept the request body, and responseBuilder is not null;
  3. For POST request, if Expect is not included, the request body is sent directly, and responseBuilder is null;
  4. POST request, no request body, responseBuilder is null;
  5. GET request, responseBuilder is null;

For each of the five cases above, the response headers are read and composed into a Response. Note: this Response has no response body yet. Also note: if the server accepts Expect: 100-continue, doesn't that mean we have effectively issued two requests? The response headers read at this point belong to the preliminary query about whether the server accepts the request body, not to the actual request. So immediately:

int code = response.code();
if (code == 100) {
  // Server sent a 100-continue even though we did not request one.
  // Try again to read the actual response.
  responseBuilder = httpCodec.readResponseHeaders(false);
  response = responseBuilder
      .request(request)
      .handshake(streamAllocation.connection().handshake())
      .sentRequestAtMillis(sentRequestMillis)
      .receivedResponseAtMillis(System.currentTimeMillis())
      .build();
  code = response.code();
}

If the response is 100, it represents a successful response to the request Expect: 100-continue, and you need to read a response header again immediately. This is the actual response header corresponding to the request.

Then finish

if (forWebSocket && code == 101) {
  // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
  response = response.newBuilder().body(Util.EMPTY_RESPONSE).build();
} else {
  response = response.newBuilder().body(httpCodec.openResponseBody(response)).build();
}

if ("close".equalsIgnoreCase(response.request().header("Connection"))
    || "close".equalsIgnoreCase(response.header("Connection"))) {
  streamAllocation.noNewStreams();
}

if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
  throw new ProtocolException(
      "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
}

return response;

forWebSocket represents a WebSocket request, so we normally go to the else branch, which reads the response body data. Next it is determined whether both the client and the server want a persistent connection: once either side specifies Connection: close, the socket must be closed. If the server returns 204/205 (return codes that generally do not occur and imply there is no response body) while the parsed response headers contain a non-zero Content-Length (which declares the byte length of a response body), the two conflict and a ProtocolException is thrown directly. Summary: this interceptor completes the encapsulation and parsing of HTTP protocol messages.

6.OkHttp summary

The realization of the entire OkHttp function is in these five default interceptors, so understanding the working mechanism of the interceptor mode is a prerequisite. The five interceptors are: retry interceptor, bridge interceptor, cache interceptor, connection interceptor, and request service interceptor. Each interceptor is responsible for a different job, just like a factory assembly line. After these five processes, the final product is completed. But unlike the pipeline, the interceptor in OkHttp will do some things before handing it over to the next interceptor every time it initiates a request, and do some things after getting the result. The whole process is sequential in the request direction, and in the reverse order in the response direction. When the user initiates a request, the task is initiated by the Dispatcher to package the request and hand it over to the retry interceptor for processing.

  1. Before handing over (to the next interceptor), the retry interceptor is responsible for judging whether the user has cancelled the request; after obtaining the result, it judges whether a retry or redirect is needed according to the exception or response code, and if the conditions are met it re-executes all the interceptors.
  2. Before handing over, the bridge interceptor is responsible for adding the necessary request headers of the HTTP protocol (such as: Host) and adding some default behaviors (such as: GZIP compression); after obtaining the results, call the save cookie interface and parse GZIP data.
  3. As the name implies, the cache interceptor reads before handing over and determines whether to use the cache; after obtaining the result, it determines whether to cache.
  4. Before handing over, the connection interceptor is responsible for finding or creating a connection and obtaining the corresponding socket stream; no additional processing is performed after the result is obtained.
  5. Request the server interceptor to communicate with the server, send data to the server, and parse the read response data.

After going through this series of processes, an HTTP request is completed!
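The two-way flow described above (down with the request, back up with the response) can be demonstrated with a minimal chain-of-responsibility sketch. The types are deliberately simplified (a String stands in for Request/Response); this is not OkHttp's RealInterceptorChain.

```java
import java.util.List;

// Minimal chain-of-responsibility: each interceptor works before and after proceed().
public class ChainSketch {
    interface Interceptor { String intercept(Chain chain); }

    static class Chain {
        private final List<Interceptor> interceptors;
        private final int index;
        Chain(List<Interceptor> interceptors, int index) {
            this.interceptors = interceptors;
            this.index = index;
        }
        String proceed() { // hand over to the next interceptor in the list
            return interceptors.get(index).intercept(new Chain(interceptors, index + 1));
        }
    }

    static String run(List<String> log) {
        Interceptor retry  = c -> { log.add("retry>");  String r = c.proceed(); log.add("<retry");  return r; };
        Interceptor bridge = c -> { log.add("bridge>"); String r = c.proceed(); log.add("<bridge"); return r; };
        Interceptor server = c -> { log.add("server");  return "response"; }; // last one talks to the server
        return new Chain(List.of(retry, bridge, server), 0).proceed();
    }
}
```

Running it shows the ordering: work done before proceed() happens in request order, and work done after proceed() happens in reverse, exactly like the five real interceptors.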

7. Supplement: Agent

When using OkHttp, if the user configured proxy or proxySelector when creating the OkHttpClient, the configured proxy is used, and proxy takes priority over proxySelector. If neither is configured, the system's configured proxy is obtained and used.

// JDK: ProxySelector
try {
  URI uri = new URI("http://restapi.amap.com");
  List<Proxy> proxyList = ProxySelector.getDefault().select(uri);
  System.out.println(proxyList.get(0).address());
  System.out.println(proxyList.get(0).type());
} catch (URISyntaxException e) {
  e.printStackTrace();
}

Therefore, if requests in our App should not go through any proxy, we can configure proxy(Proxy.NO_PROXY), which also prevents the traffic from being captured through a proxy. NO_PROXY is defined as follows:

public static final Proxy NO_PROXY = new Proxy();

private Proxy() {
  this.type = Proxy.Type.DIRECT;
  this.sa = null;
}

There are three types of abstract classes corresponding to agents in Java:

public static enum Type {
  DIRECT,
  HTTP,
  SOCKS;

  private Type() {
  }
}
  • DIRECT: No agent
  • HTTP: http proxy
  • SOCKS: socks proxy
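Since these map directly onto the JDK's java.net.Proxy, they can be exercised with the standard library alone. The addresses below are illustrative local endpoints, not real proxies:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

// The three JDK proxy types; addresses here are placeholders for demonstration.
public class ProxyTypes {
    static final Proxy HTTP_PROXY  = new Proxy(Proxy.Type.HTTP,  new InetSocketAddress("127.0.0.1", 8888));
    static final Proxy SOCKS_PROXY = new Proxy(Proxy.Type.SOCKS, new InetSocketAddress("127.0.0.1", 1080));
    static final Proxy DIRECT      = Proxy.NO_PROXY; // Type.DIRECT with a null address
}
```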

Needless to say the first one, but what is the difference between Http proxy and Socks proxy?

With a SOCKS proxy, in an HTTP scenario the proxy server simply forwards TCP packets; an HTTP proxy server, in addition to forwarding data, also parses the HTTP requests and responses and does some processing based on their contents.

RealConnection's connectSocket method:

// If it is a SOCKS proxy: new Socket(proxy); otherwise (no proxy or an HTTP proxy):
// address.socketFactory().createSocket(), which is equivalent to a direct new Socket()
rawSocket = proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.HTTP
    ? address.socketFactory().createSocket()
    : new Socket(proxy);

// connect
socket.connect(address);

For a SOCKS proxy, the Socket itself implements the SOCKS protocol, so OkHttp only needs to construct it with the proxy (new Socket(proxy)); for an HTTP proxy (or no proxy), OkHttp creates an ordinary Socket and connects to the proxy server just as it would connect directly.

The address passed to connect comes from the inetSocketAddresses list, which RouteSelector fills in its resetNextInetSocketAddress method.

When a SOCKS proxy is set, domain-name resolution of the HTTP server is handed to the proxy server. When an HTTP proxy is set, the Dns configured on the OkHttpClient resolves only the HTTP proxy server's domain name, while resolution of the target HTTP server's domain name is left to the proxy server.
The above covers the use of proxies and DNS in OkHttp, but one more point deserves attention: HTTP proxies are themselves divided into two kinds, ordinary proxies and tunnel proxies.

An ordinary proxy requires no additional setup and plays the role of a "middleman", relaying messages between the two ends. When this middleman receives a request message from the client, it must correctly handle the request and the connection state, and at the same time send a new request to the server; after receiving the response, it packages the result into a response body and returns it to the client. In the ordinary-proxy process, neither end is necessarily aware of the middleman's existence.

A tunnel proxy, however, no longer acts as a middleman and cannot rewrite the client's request; once the connection is established, it only forwards the client's traffic to the terminal server through the tunnel. The tunnel is set up with an HTTP CONNECT request. This request method has no request body, is consumed only by the proxy server, and is never passed on to the terminal server. Once its request headers end, all subsequent data is treated as data to be forwarded to the terminal server, and the proxy must forward it blindly until the TCP read channel from the client closes. After the proxy has established its connection to the terminal server, the CONNECT response can return a 200 Connection Established status code to the client, indicating that the connection to the terminal server succeeded.
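The CONNECT request the tunnel proxy receives has the shape below. This is a hypothetical helper that only renders the message text; OkHttp builds an equivalent request internally, and the exact headers it sends may differ.

```java
// Hypothetical helper producing the tunnel-establishing CONNECT request described above.
public class TunnelSketch {
    static String connectRequest(String host, int port) {
        return "CONNECT " + host + ":" + port + " HTTP/1.1\r\n"
             + "Host: " + host + ":" + port + "\r\n"
             + "Proxy-Connection: Keep-Alive\r\n"
             + "\r\n"; // the header section ends here; everything after is tunneled bytes
    }
}
```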


RealConnection's connect method


The requiresTunnel method returns true when the current request is HTTPS and an HTTP proxy is in use; in that case connectTunnel is called to establish the tunnel.


If the connection succeeds, the proxy server returns 200. If it returns 407, the proxy server requires authentication (a paid proxy, for example), and a Proxy-Authorization request header must then be added.
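A Basic credential for the Proxy-Authorization header can be built as follows. This sketch mirrors what okhttp's Credentials.basic(user, password) helper produces; the class and method here are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Builds the "Basic" credential sent in the Proxy-Authorization header after a 407.
public class ProxyAuth {
    static String basic(String user, String password) {
        String raw = user + ":" + password;
        // Basic auth historically encodes the user:password pair as ISO-8859-1.
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.ISO_8859_1));
    }
}
```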
