OkHttp Interceptors

Last time we saw that RealInterceptorChain#proceed runs the interceptors' intercept methods one after another. Now let's go through these interceptors one by one and see how each of them implements intercept.
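As a quick reminder of how that dispatch works, here is a minimal sketch of the index-based recursion. The types MiniInterceptor, MiniChain, MiniRequest and MiniResponse are hypothetical stand-ins, not OkHttp's real classes; the real RealInterceptorChain also carries the StreamAllocation, HttpCodec, connection, call, event listener and timeouts.

```java
import java.io.IOException;
import java.util.List;

// Hypothetical stand-in types for illustration only; OkHttp's real
// Interceptor.Chain carries far more state.
interface MiniInterceptor {
  MiniResponse intercept(MiniChain chain) throws IOException;
}

final class MiniChain {
  private final List<MiniInterceptor> interceptors;
  private final int index;
  private final MiniRequest request;

  MiniChain(List<MiniInterceptor> interceptors, int index, MiniRequest request) {
    this.interceptors = interceptors;
    this.index = index;
    this.request = request;
  }

  MiniRequest request() {
    return request;
  }

  MiniResponse proceed(MiniRequest request) throws IOException {
    // Hand a chain positioned at the next interceptor to the current one.
    // When that interceptor calls proceed() itself, the recursion continues
    // down the list until the last interceptor produces a response.
    MiniChain next = new MiniChain(interceptors, index + 1, request);
    return interceptors.get(index).intercept(next);
  }
}

final class MiniRequest {}   // stands in for okhttp3.Request
final class MiniResponse {}  // stands in for okhttp3.Response
```

Each interceptor receives a chain positioned at itself; calling chain.proceed(request) inside intercept runs everything below it and returns that lower response.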

getResponseWithInterceptorChain

```java
Response getResponseWithInterceptorChain() throws IOException {
  // Build a full stack of interceptors.
  List<Interceptor> interceptors = new ArrayList<>();
  interceptors.addAll(client.interceptors());
  interceptors.add(retryAndFollowUpInterceptor);
  interceptors.add(new BridgeInterceptor(client.cookieJar()));
  interceptors.add(new CacheInterceptor(client.internalCache()));
  interceptors.add(new ConnectInterceptor(client));
  if (!forWebSocket) {
    interceptors.addAll(client.networkInterceptors());
  }
  interceptors.add(new CallServerInterceptor(forWebSocket));

  Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
      originalRequest, this, eventListener, client.connectTimeoutMillis(),
      client.readTimeoutMillis(), client.writeTimeoutMillis());

  return chain.proceed(originalRequest);
}
```

From this code we can see that the first interceptors added to the list are the ones the user configured on the OkHttpClient itself. As the diagram below shows, there are two kinds of user interceptors: application-level ones and network-level ones. The ones added here, at the top of the list, are the application-level interceptors, so their intercept callbacks are the first to run.

(Interceptor diagram)

After the request has passed through our own interceptors (assuming none of them short-circuits and returns a response itself), it keeps moving down the chain. Let's see what each of the built-in interceptors actually does.
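For reference, this is how the two kinds are registered when the client is built; the logging bodies below are just placeholders. addInterceptor adds an application interceptor (runs once per call, before retries and redirects), and addNetworkInterceptor adds a network interceptor (runs once per network round trip, after the connection exists):

```java
import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public final class InterceptorSetup {
  public static OkHttpClient build() {
    return new OkHttpClient.Builder()
        // Application interceptor: sits above RetryAndFollowUpInterceptor,
        // so it does not see redirects or retries.
        .addInterceptor(new Interceptor() {
          @Override public Response intercept(Chain chain) throws IOException {
            Request request = chain.request();
            System.out.println("app-level request: " + request.url());
            return chain.proceed(request);
          }
        })
        // Network interceptor: sits below ConnectInterceptor, so it sees each
        // network round trip and the real connection.
        .addNetworkInterceptor(new Interceptor() {
          @Override public Response intercept(Chain chain) throws IOException {
            Response response = chain.proceed(chain.request());
            System.out.println("network-level response: " + response.code()
                + " via " + chain.connection());
            return response;
          }
        })
        .build();
  }
}
```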

retryAndFollowUpInterceptor

RetryAndFollowUpInterceptor is mainly responsible for retrying requests that fail at the network level and for following redirects. The flow is roughly as follows:

  1. Using the client's connection pool (a Connection is a socket connection to the server), an Address built from the request URL, the call and so on, it creates a StreamAllocation. The StreamAllocation allocates streams onto connections (a stream is the data flow carried on a socket connection; HTTP/1.x allows only one stream per connection, while HTTP/2 can multiplex several).
  2. Enter a loop: run the rest of the chain to get the Response produced after all the lower interceptors have finished, then call followUpRequest(response), which builds a follow-up Request from the response code (and, for redirects, the redirect URL). If that Request is null there is nothing to follow up: release the streamAllocation and return the Response to the Call above (that is, back to execute or enqueue). Otherwise closeQuietly the current response body and carry on (a simplified sketch of the redirect decision follows this list).
  3. Check the follow-up count: if it exceeds MAX_FOLLOW_UPS (20), release the streamAllocation and throw. Check whether the request body can be sent again, and whether the existing Connection can be reused (same host, same port, same scheme); if not, release the old StreamAllocation and create a new one. Then take the follow-up as the new request and go back to step 2.
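To make step 2 concrete, here is a heavily simplified sketch of the redirect branch only. This is not OkHttp's actual followUpRequest: the real method also handles 401/407 authentication challenges, 408 retries, 307/308 method preservation and the client's followRedirects/followSslRedirects settings.

```java
import okhttp3.HttpUrl;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

final class RedirectSketch {
  // Simplified sketch of the redirect decision, for illustration only.
  static Request followUpForRedirect(OkHttpClient client, Response response) {
    int code = response.code();
    if (code < 300 || code >= 400) return null;   // not a redirect
    if (!client.followRedirects()) return null;   // redirects are disabled

    String location = response.header("Location");
    if (location == null) return null;            // nowhere to go

    // Resolve the Location header against the current request URL.
    HttpUrl url = response.request().url().resolve(location);
    if (url == null) return null;                 // unparseable target

    // A plain redirect is re-requested as a bodiless GET.
    return response.request().newBuilder()
        .url(url)
        .method("GET", null)
        .removeHeader("Transfer-Encoding")
        .removeHeader("Content-Length")
        .removeHeader("Content-Type")
        .build();
  }
}
```

The real intercept, shown next, drives this logic inside its retry loop.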
```java
@Override public Response intercept(Chain chain) throws IOException {
Request request = chain.request();
RealInterceptorChain realChain = (RealInterceptorChain) chain;
Call call = realChain.call();
EventListener eventListener = realChain.eventListener();

streamAllocation = new StreamAllocation(client.connectionPool(), createAddress(request.url()),
call, eventListener, callStackTrace);

int followUpCount = 0;
Response priorResponse = null;
while (true) {
if (canceled) {
streamAllocation.release();
throw new IOException("Canceled");
}

Response response;
boolean releaseConnection = true;
try {
response = realChain.proceed(request, streamAllocation, null, null);
releaseConnection = false;
} catch (RouteException e) {
// The attempt to connect via a route failed. The request will not have been sent.
if (!recover(e.getLastConnectException(), false, request)) {
throw e.getLastConnectException();
}
releaseConnection = false;
continue;
} catch (IOException e) {
// An attempt to communicate with a server failed. The request may have been sent.
boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
if (!recover(e, requestSendStarted, request)) throw e;
releaseConnection = false;
continue;
} finally {
// We're throwing an unchecked exception. Release any resources.
if (releaseConnection) {
streamAllocation.streamFailed(null);
streamAllocation.release();
}
}

// Attach the prior response if it exists. Such responses never have a body.
if (priorResponse != null) {
response = response.newBuilder()
.priorResponse(priorResponse.newBuilder()
.body(null)
.build())
.build();
}

Request followUp = followUpRequest(response);

if (followUp == null) {
if (!forWebSocket) {
streamAllocation.release();
}
return response;
}

closeQuietly(response.body());

if (++followUpCount > MAX_FOLLOW_UPS) {
streamAllocation.release();
throw new ProtocolException("Too many follow-up requests: " + followUpCount);
}

if (followUp.body() instanceof UnrepeatableRequestBody) {
streamAllocation.release();
throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
}

if (!sameConnection(response, followUp.url())) {
streamAllocation.release();
streamAllocation = new StreamAllocation(client.connectionPool(),
createAddress(followUp.url()), call, eventListener, callStackTrace);
} else if (streamAllocation.codec() != null) {
throw new IllegalStateException("Closing the body of " + response
+ " didn't close its backing stream. Bad interceptor?");
}

request = followUp;
priorResponse = response;
}
}
```

As the topmost of the built-in interceptors, retryAndFollowUpInterceptor mainly deals with retries and redirects. The thing to note is that realChain.proceed hands the request down to the whole stack of lower interceptors, which process it layer by layer before a Response comes back up, somewhat like Android's touch-event dispatch.

BridgeInterceptor

BridgeInterceptor does two main jobs: it fills in the request header fields the caller left out, and it post-processes the response returned by the lower interceptors (saving Set-Cookie values, unzipping gzip bodies). In detail:

  1. Set the request body's Content-Type, if there is a body.
  2. Set the size of the request body, or mark the transfer as chunked; note that Content-Length (a known size) and Transfer-Encoding: chunked are mutually exclusive.
  3. Fill in Host, Connection: Keep-Alive, Accept-Encoding (gzip by default, unless the caller already set Accept-Encoding or Range) and the User-Agent.
  4. Add the Cookie header; the CookieJar is the one supplied when the OkHttpClient was built (a minimal CookieJar sketch follows this list).
  5. Build the completed request and hand it to chain.proceed; the lower interceptors do their work and a Response comes back.
  6. Let the cookieJar save the response's Set-Cookie headers for the request URL. If the response was compressed with gzip, wrap the body in a GzipSource to decompress it and strip the Content-Encoding and Content-Length headers (they no longer describe the decompressed body), then build the modified response and hand it up to the interceptor above.
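The default CookieJar is CookieJar.NO_COOKIES, which does nothing. To show what BridgeInterceptor calls into, here is a minimal in-memory sketch (InMemoryCookieJar is a hypothetical name; a real implementation should also handle expiry, path/domain matching and persistence):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import okhttp3.Cookie;
import okhttp3.CookieJar;
import okhttp3.HttpUrl;

// Minimal in-memory CookieJar, for illustration only.
final class InMemoryCookieJar implements CookieJar {
  private final Map<String, List<Cookie>> store = new HashMap<>();

  // Called by BridgeInterceptor after the response headers arrive
  // (via HttpHeaders.receiveHeaders), with the parsed Set-Cookie values.
  @Override public synchronized void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
    store.put(url.host(), new ArrayList<>(cookies));
  }

  // Called by BridgeInterceptor before the request is sent, to build
  // the Cookie header.
  @Override public synchronized List<Cookie> loadForRequest(HttpUrl url) {
    List<Cookie> cookies = store.get(url.host());
    return cookies != null ? cookies : new ArrayList<Cookie>();
  }
}
```

It would be wired in with new OkHttpClient.Builder().cookieJar(new InMemoryCookieJar()).build(). The full BridgeInterceptor#intercept follows.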
```java
@Override public Response intercept(Chain chain) throws IOException {
Request userRequest = chain.request();
Request.Builder requestBuilder = userRequest.newBuilder();

RequestBody body = userRequest.body();
if (body != null) {
MediaType contentType = body.contentType();
if (contentType != null) {
requestBuilder.header("Content-Type", contentType.toString());
}

long contentLength = body.contentLength();
if (contentLength != -1) {
requestBuilder.header("Content-Length", Long.toString(contentLength));
requestBuilder.removeHeader("Transfer-Encoding");
} else {
requestBuilder.header("Transfer-Encoding", "chunked");
requestBuilder.removeHeader("Content-Length");
}
}

if (userRequest.header("Host") == null) {
requestBuilder.header("Host", hostHeader(userRequest.url(), false));
}

if (userRequest.header("Connection") == null) {
requestBuilder.header("Connection", "Keep-Alive");
}

// If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
// the transfer stream.
boolean transparentGzip = false;
if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
transparentGzip = true;
requestBuilder.header("Accept-Encoding", "gzip");
}

List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
if (!cookies.isEmpty()) {
requestBuilder.header("Cookie", cookieHeader(cookies));
}

if (userRequest.header("User-Agent") == null) {
requestBuilder.header("User-Agent", Version.userAgent());
}

Response networkResponse = chain.proceed(requestBuilder.build());

HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());

Response.Builder responseBuilder = networkResponse.newBuilder()
.request(userRequest);

if (transparentGzip
&& "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
&& HttpHeaders.hasBody(networkResponse)) {
GzipSource responseBody = new GzipSource(networkResponse.body().source());
Headers strippedHeaders = networkResponse.headers().newBuilder()
.removeAll("Content-Encoding")
.removeAll("Content-Length")
.build();
responseBuilder.headers(strippedHeaders);
responseBuilder.body(new RealResponseBody(strippedHeaders, Okio.buffer(responseBody)));
}

return responseBuilder.build();
}
```

CacheInterceptor

CacheInterceptor is where HTTP caching happens.

HTTP caching generally comes in two flavors: server side (caching servers) and client side (web browsers, apps):

  • Server-side cache: a caching server stores the origin server's static content and answers new requests directly while the copy is still within its cache lifetime (a CDN, for example).
  • Client-side cache: the client application caches its own requests and responses, and serves the same data from the cache when it is requested again within the cache lifetime.

Cache is the cache object passed in when the OkHttpClient is built; internally it uses DiskLruCache to store the responses that need caching. Whether a cached response can be served as is, needs revalidation, or must be refetched from the network is decided by CacheStrategy, the relevant parts of which are shown further below.
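As a quick reminder, this is how that Cache is supplied; the directory name and the 10 MiB size below are arbitrary example values:

```java
import java.io.File;
import okhttp3.Cache;
import okhttp3.OkHttpClient;

public final class CacheSetup {
  public static OkHttpClient build(File cacheDir) {
    // 10 MiB on-disk cache; internally Cache is backed by DiskLruCache.
    long maxSize = 10L * 1024 * 1024;
    Cache cache = new Cache(new File(cacheDir, "http_cache"), maxSize);

    return new OkHttpClient.Builder()
        .cache(cache)
        .build();
  }
}
```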

```java
/** The request to send on the network, or null if this call doesn't use the network. */
public final @Nullable Request networkRequest;

/** The cached response to return or validate; or null if this call doesn't use a cache. */
public final @Nullable Response cacheResponse;

CacheStrategy(Request networkRequest, Response cacheResponse) {
this.networkRequest = networkRequest;
this.cacheResponse = cacheResponse;
}

/** Returns true if {@code response} can be stored to later serve another request. */
public static boolean isCacheable(Response response, Request request) {
// Always go to network for uncacheable response codes (RFC 7231 section 6.1),
// This implementation doesn't support caching partial content.
switch (response.code()) {
case HTTP_OK:
case HTTP_NOT_AUTHORITATIVE:
case HTTP_NO_CONTENT:
case HTTP_MULT_CHOICE:
case HTTP_MOVED_PERM:
case HTTP_NOT_FOUND:
case HTTP_BAD_METHOD:
case HTTP_GONE:
case HTTP_REQ_TOO_LONG:
case HTTP_NOT_IMPLEMENTED:
case StatusLine.HTTP_PERM_REDIRECT:
// These codes can be cached unless headers forbid it.
break;

case HTTP_MOVED_TEMP:
case StatusLine.HTTP_TEMP_REDIRECT:
// These codes can only be cached with the right response headers.
// http://tools.ietf.org/html/rfc7234#section-3
// s-maxage is not checked because OkHttp is a private cache that should ignore s-maxage.
if (response.header("Expires") != null
|| response.cacheControl().maxAgeSeconds() != -1
|| response.cacheControl().isPublic()
|| response.cacheControl().isPrivate()) {
break;
}
// Fall-through.

default:
// All other codes cannot be cached.
return false;
}

// A 'no-store' directive on request or response prevents the response from being cached.
return !response.cacheControl().noStore() && !request.cacheControl().noStore();
}

public static class Factory {
final long nowMillis;
final Request request;
final Response cacheResponse;

/** The server's time when the cached response was served, if known. */
private Date servedDate;
private String servedDateString;

/** The last modified date of the cached response, if known. */
private Date lastModified;
private String lastModifiedString;

/**
* The expiration date of the cached response, if known. If both this field and the max age are
* set, the max age is preferred.
*/
private Date expires;

/**
* Extension header set by OkHttp specifying the timestamp when the cached HTTP request was
* first initiated.
*/
private long sentRequestMillis;

/**
* Extension header set by OkHttp specifying the timestamp when the cached HTTP response was
* first received.
*/
private long receivedResponseMillis;

/** Etag of the cached response. */
private String etag;

/** Age of the cached response. */
private int ageSeconds = -1;

public Factory(long nowMillis, Request request, Response cacheResponse) {
this.nowMillis = nowMillis;
this.request = request;
this.cacheResponse = cacheResponse;

if (cacheResponse != null) {
this.sentRequestMillis = cacheResponse.sentRequestAtMillis();
this.receivedResponseMillis = cacheResponse.receivedResponseAtMillis();
Headers headers = cacheResponse.headers();
for (int i = 0, size = headers.size(); i < size; i++) {
String fieldName = headers.name(i);
String value = headers.value(i);
if ("Date".equalsIgnoreCase(fieldName)) {
servedDate = HttpDate.parse(value);
servedDateString = value;
} else if ("Expires".equalsIgnoreCase(fieldName)) {
expires = HttpDate.parse(value);
} else if ("Last-Modified".equalsIgnoreCase(fieldName)) {
lastModified = HttpDate.parse(value);
lastModifiedString = value;
} else if ("ETag".equalsIgnoreCase(fieldName)) {
etag = value;
} else if ("Age".equalsIgnoreCase(fieldName)) {
ageSeconds = HttpHeaders.parseSeconds(value, -1);
}
}
}
}

/**
* Returns a strategy to satisfy {@code request} using the a cached response {@code response}.
*/
public CacheStrategy get() {
CacheStrategy candidate = getCandidate();

if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
// We're forbidden from using the network and the cache is insufficient.
return new CacheStrategy(null, null);
}

return candidate;
}

/** Returns a strategy to use assuming the request can use the network. */
private CacheStrategy getCandidate() {
// No cached response.
if (cacheResponse == null) {
return new CacheStrategy(request, null);
}

// Drop the cached response if it's missing a required handshake.
if (request.isHttps() && cacheResponse.handshake() == null) {
return new CacheStrategy(request, null);
}

// If this response shouldn't have been stored, it should never be used
// as a response source. This check should be redundant as long as the
// persistence store is well-behaved and the rules are constant.
if (!isCacheable(cacheResponse, request)) {
return new CacheStrategy(request, null);
}

CacheControl requestCaching = request.cacheControl();
if (requestCaching.noCache() || hasConditions(request)) {
return new CacheStrategy(request, null);
}

CacheControl responseCaching = cacheResponse.cacheControl();
if (responseCaching.immutable()) {
return new CacheStrategy(null, cacheResponse);
}

long ageMillis = cacheResponseAge();
long freshMillis = computeFreshnessLifetime();

if (requestCaching.maxAgeSeconds() != -1) {
freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
}

long minFreshMillis = 0;
if (requestCaching.minFreshSeconds() != -1) {
minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
}

long maxStaleMillis = 0;
if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
}

if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
Response.Builder builder = cacheResponse.newBuilder();
if (ageMillis + minFreshMillis >= freshMillis) {
builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
}
long oneDayMillis = 24 * 60 * 60 * 1000L;
if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
}
return new CacheStrategy(null, builder.build());
}

// Find a condition to add to the request. If the condition is satisfied, the response body
// will not be transmitted.
String conditionName;
String conditionValue;
if (etag != null) {
conditionName = "If-None-Match";
conditionValue = etag;
} else if (lastModified != null) {
conditionName = "If-Modified-Since";
conditionValue = lastModifiedString;
} else if (servedDate != null) {
conditionName = "If-Modified-Since";
conditionValue = servedDateString;
} else {
return new CacheStrategy(request, null); // No condition! Make a regular request.
}

Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);

Request conditionalRequest = request.newBuilder()
.headers(conditionalRequestHeaders.build())
.build();
return new CacheStrategy(conditionalRequest, cacheResponse);
}

/**
* Returns the number of milliseconds that the response was fresh for, starting from the served
* date.
*/
private long computeFreshnessLifetime() {
CacheControl responseCaching = cacheResponse.cacheControl();
if (responseCaching.maxAgeSeconds() != -1) {
return SECONDS.toMillis(responseCaching.maxAgeSeconds());
} else if (expires != null) {
long servedMillis = servedDate != null
? servedDate.getTime()
: receivedResponseMillis;
long delta = expires.getTime() - servedMillis;
return delta > 0 ? delta : 0;
} else if (lastModified != null
&& cacheResponse.request().url().query() == null) {
// As recommended by the HTTP RFC and implemented in Firefox, the
// max age of a document should be defaulted to 10% of the
// document's age at the time it was served. Default expiration
// dates aren't used for URIs containing a query.
long servedMillis = servedDate != null
? servedDate.getTime()
: sentRequestMillis;
long delta = servedMillis - lastModified.getTime();
return delta > 0 ? (delta / 10) : 0;
}
return 0;
}

/**
* Returns the current age of the response, in milliseconds. The calculation is specified by RFC
* 2616, 13.2.3 Age Calculations.
*/
private long cacheResponseAge() {
long apparentReceivedAge = servedDate != null
? Math.max(0, receivedResponseMillis - servedDate.getTime())
: 0;
long receivedAge = ageSeconds != -1
? Math.max(apparentReceivedAge, SECONDS.toMillis(ageSeconds))
: apparentReceivedAge;
long responseDuration = receivedResponseMillis - sentRequestMillis;
long residentDuration = nowMillis - receivedResponseMillis;
return receivedAge + responseDuration + residentDuration;
}

/**
* Returns true if computeFreshnessLifetime used a heuristic. If we used a heuristic to serve a
* cached response older than 24 hours, we are required to attach a warning.
*/
private boolean isFreshnessLifetimeHeuristic() {
return cacheResponse.cacheControl().maxAgeSeconds() == -1 && expires == null;
}

/**
* Returns true if the request contains conditions that save the server from sending a response
* that the client has locally. When a request is enqueued with its own conditions, the built-in
* response cache won't be used.
*/
private static boolean hasConditions(Request request) {
return request.header("If-Modified-Since") != null || request.header("If-None-Match") != null;
}
}
}
```
```java
@Override public Response intercept(Chain chain) throws IOException {
Response cacheCandidate = cache != null
? cache.get(chain.request())
: null;

long now = System.currentTimeMillis();

CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
Request networkRequest = strategy.networkRequest;
Response cacheResponse = strategy.cacheResponse;

if (cache != null) {
cache.trackResponse(strategy);
}

if (cacheCandidate != null && cacheResponse == null) {
closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
}

// If we're forbidden from using the network and the cache is insufficient, fail.
if (networkRequest == null && cacheResponse == null) {
return new Response.Builder()
.request(chain.request())
.protocol(Protocol.HTTP_1_1)
.code(504)
.message("Unsatisfiable Request (only-if-cached)")
.body(Util.EMPTY_RESPONSE)
.sentRequestAtMillis(-1L)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
}

// If we don't need the network, we're done.
if (networkRequest == null) {
return cacheResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.build();
}

Response networkResponse = null;
try {
networkResponse = chain.proceed(networkRequest);
} finally {
// If we're crashing on I/O or otherwise, don't leak the cache body.
if (networkResponse == null && cacheCandidate != null) {
closeQuietly(cacheCandidate.body());
}
}

// If we have a cache response too, then we're doing a conditional get.
if (cacheResponse != null) {
if (networkResponse.code() == HTTP_NOT_MODIFIED) {
Response response = cacheResponse.newBuilder()
.headers(combine(cacheResponse.headers(), networkResponse.headers()))
.sentRequestAtMillis(networkResponse.sentRequestAtMillis())
.receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
networkResponse.body().close();

// Update the cache after combining headers but before stripping the
// Content-Encoding header (as performed by initContentStream()).
cache.trackConditionalCacheHit();
cache.update(cacheResponse, response);
return response;
} else {
closeQuietly(cacheResponse.body());
}
}

Response response = networkResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();

if (cache != null) {
if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
// Offer this request to the cache.
CacheRequest cacheRequest = cache.put(response);
return cacheWritingResponse(cacheRequest, response);
}

if (HttpMethod.invalidatesCache(networkRequest.method())) {
try {
cache.remove(networkRequest);
} catch (IOException ignored) {
// The cache cannot be written.
}
}
}

return response;
}
```
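The synthetic 504 branch above is what you hit when the caller forbids the network. On the request side, these decisions are driven through CacheControl; a few common examples (FORCE_CACHE and FORCE_NETWORK are real OkHttp constants, the url parameter is a placeholder):

```java
import java.util.concurrent.TimeUnit;
import okhttp3.CacheControl;
import okhttp3.Request;

public final class CacheControlExamples {
  // Never touch the network: if the cache cannot satisfy the request,
  // CacheInterceptor answers with the synthetic 504 seen above.
  static Request cacheOnly(String url) {
    return new Request.Builder()
        .url(url)
        .cacheControl(CacheControl.FORCE_CACHE)
        .build();
  }

  // Skip the cache entirely and always go to the network.
  static Request networkOnly(String url) {
    return new Request.Builder()
        .url(url)
        .cacheControl(CacheControl.FORCE_NETWORK)
        .build();
  }

  // Accept a cached response that is up to one hour stale.
  static Request tolerateStale(String url) {
    return new Request.Builder()
        .url(url)
        .cacheControl(new CacheControl.Builder()
            .maxStale(1, TimeUnit.HOURS)
            .build())
        .build();
  }
}
```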

ConnectInterceptor

ConnectInterceptor's job is straightforward: establish a Connection to the server. The StreamAllocation supplies a healthy connection that is not currently in use (reusing one from the pool where possible). Once the connection and its HttpCodec exist, the request is handed to the next interceptor.
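The pool that StreamAllocation draws these connections from is configurable when the client is built; the numbers below are just example values (OkHttp's default pool keeps up to 5 idle connections for 5 minutes):

```java
import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public final class PoolSetup {
  public static OkHttpClient build() {
    return new OkHttpClient.Builder()
        // Keep at most 8 idle connections alive, each for up to 10 minutes,
        // so follow-up requests to the same host can reuse the socket.
        .connectionPool(new ConnectionPool(8, 10, TimeUnit.MINUTES))
        .build();
  }
}
```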

```java
@Override public Response intercept(Chain chain) throws IOException {
  RealInterceptorChain realChain = (RealInterceptorChain) chain;
  Request request = realChain.request();
  StreamAllocation streamAllocation = realChain.streamAllocation();

  // We need the network to satisfy this request. Possibly for validating a conditional GET.
  boolean doExtensiveHealthChecks = !request.method().equals("GET");
  HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
  RealConnection connection = streamAllocation.connection();

  return realChain.proceed(request, streamAllocation, httpCodec, connection);
}
```

networkInterceptors

These interceptors are defined by the user. They are added to the chain only when the call is not forWebSocket, and then run layer by layer like the others, as can be seen in RealCall#getResponseWithInterceptorChain above.

CallServerInterceptor

Finally, the last interceptor: this is the one that actually talks to the server, writing the request out through the HttpCodec and reading the response back.
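One detail worth calling out before the code: if the request carries an Expect: 100-continue header, CallServerInterceptor flushes the headers and waits for the server's interim answer before writing the body. Setting that header from the caller's side looks like this (the url and payload parameters are placeholders):

```java
import okhttp3.MediaType;
import okhttp3.Request;
import okhttp3.RequestBody;

public final class ExpectContinueExample {
  static Request buildUpload(String url, byte[] payload) {
    RequestBody body = RequestBody.create(
        MediaType.parse("application/octet-stream"), payload);

    // With this header, CallServerInterceptor only writes the body after the
    // server signals it is willing to accept it; otherwise it returns the
    // early response (e.g. a 4xx) without ever transmitting the body.
    return new Request.Builder()
        .url(url)
        .header("Expect", "100-continue")
        .post(body)
        .build();
  }
}
```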

```java
@Override public Response intercept(Chain chain) throws IOException {
RealInterceptorChain realChain = (RealInterceptorChain) chain;
HttpCodec httpCodec = realChain.httpStream();
StreamAllocation streamAllocation = realChain.streamAllocation();
RealConnection connection = (RealConnection) realChain.connection();
Request request = realChain.request();

long sentRequestMillis = System.currentTimeMillis();

realChain.eventListener().requestHeadersStart(realChain.call());
try {
httpCodec.writeRequestHeaders(request);
realChain.eventListener().requestHeadersEnd(realChain.call(), null);
} catch (IOException ioe) {
realChain.eventListener().requestHeadersEnd(realChain.call(), ioe);
throw ioe;
}

Response.Builder responseBuilder = null;
if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
realChain.eventListener().requestBodyStart(realChain.call());
try {
// If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
// Continue" response before transmitting the request body. If we don't get that, return
// what we did get (such as a 4xx response) without ever transmitting the request body.
if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
httpCodec.flushRequest();
// TODO event listener
responseBuilder = httpCodec.readResponseHeaders(true);
}

if (responseBuilder == null) {
// Write the request body if the "Expect: 100-continue" expectation was met.
Sink requestBodyOut =
httpCodec.createRequestBody(request, request.body().contentLength());
BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);

request.body().writeTo(bufferedRequestBody);
bufferedRequestBody.close();
} else if (!connection.isMultiplexed()) {
// If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
// from being reused. Otherwise we're still obligated to transmit the request body to
// leave the connection in a consistent state.
streamAllocation.noNewStreams();
}
realChain.eventListener().requestBodyEnd(realChain.call(), null);
} catch (IOException ioe) {
realChain.eventListener().requestBodyEnd(realChain.call(), ioe);
throw ioe;
}
}

httpCodec.finishRequest();

if (responseBuilder == null) {
realChain.eventListener().responseHeadersStart(realChain.call());
try {
responseBuilder = httpCodec.readResponseHeaders(false);
realChain.eventListener().responseHeadersEnd(realChain.call(), null);
} catch (IOException ioe) {
realChain.eventListener().responseHeadersEnd(realChain.call(), ioe);
throw ioe;
}
}

Response response = responseBuilder
.request(request)
.handshake(streamAllocation.connection().handshake())
.sentRequestAtMillis(sentRequestMillis)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();

int code = response.code();
if (forWebSocket && code == 101) {
// Connection is upgrading, but we need to ensure interceptors see a non-null response body.
response = response.newBuilder()
.body(Util.EMPTY_RESPONSE)
.build();
} else {
response = response.newBuilder()
.body(httpCodec.openResponseBody(response))
.build();
}

if ("close".equalsIgnoreCase(response.request().header("Connection"))
|| "close".equalsIgnoreCase(response.header("Connection"))) {
streamAllocation.noNewStreams();
}

if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
throw new ProtocolException(
"HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
}

return response;
}
```