Optimization (performance regression fix): Use a bigger buffer for server reads.

Change the server read buffer limits to a 16KB minimum and a 256KB maximum. They used to be 2KB and 2GB, and before r9766, 4KB and SQUID_TCP_SO_RCVBUF.

Trunk r9766 ("Remove limit on HTTP headers read") made the default HTTP server read buffer size 2KB instead of 4KB, visibly slowing down Squid when kernel network buffers are full and can sustain larger Squid reads. Doing up to twice as many network reads is expensive, and probably not just because of the extra system call overhead.

We never grow the buffer unless the _parser_ needs a bigger one: even if the HTTP client is slower than the server, the buffer stays small because it hands all its data to Store, and Store eventually stalls reading via delayAwareRead() and read_ahead_gap. The situation may be different with RESPMOD, but if the adaptation service is fast, the buffer would still not grow.

This change does not reset the minimum buffer size to the old 4KB default because memory is much cheaper now than when that default was set. 8KB might have worked too, but with a typical median response size of about 12KB, a larger buffer may be a good idea for a busy Squid. More performance work is needed to find the optimal value (which could depend on the environment).

This change does not keep the maximum buffer size at the current 2GB limit because we have not tested how the header and chunking parsers would cope with malicious messages trying to run Squid out of RAM, and also because no good parser should need that much lookahead space. Is 256KB enough for all legitimate real-world response headers? We do not know.

It is tempting to use Config.tcpRcvBufsz or SQUID_TCP_SO_RCVBUF to derive the right minimum or maximum buffer size, but those parameters deal with low-level TCP buffering, while this buffer deals with HTTP parsing.

This change has been tested in production environments.

=== modified file 'src/http.cc'
--- src/http.cc	2011-03-30 18:14:08 +0000
+++ src/http.cc	2011-03-31 16:40:31 +0000
@@ -77,41 +77,41 @@
         status = false; \
     }
 
 CBDATA_CLASS_INIT(HttpStateData);
 
 static const char *const crlf = "\r\n";
 
 static void httpMaybeRemovePublic(StoreEntry *, http_status);
 static void copyOneHeaderFromClientsideRequestToUpstreamRequest(const HttpHeaderEntry *e,
         const String strConnection, HttpRequest * request, const HttpRequest * orig_request,
         HttpHeader * hdr_out, const int we_do_ranges, const http_state_flags);
 
 HttpStateData::HttpStateData(FwdState *theFwdState) : AsyncJob("HttpStateData"), ServerStateData(theFwdState),
         lastChunk(0), header_bytes_read(0), reply_bytes_read(0),
         body_bytes_truncated(0), httpChunkDecoder(NULL)
 {
     debugs(11,5,HERE << "HttpStateData " << this << " created");
     ignoreCacheControl = false;
     surrogateNoStore = false;
     fd = fwd->server_fd;
     readBuf = new MemBuf;
-    readBuf->init();
+    readBuf->init(16*1024, 256*1024);
     orig_request = HTTPMSGLOCK(fwd->request);
 
     // reset peer response time stats for %<pt
     orig_request->hier.peer_http_request_sent.tv_sec = 0;
     orig_request->hier.peer_http_request_sent.tv_usec = 0;
 
     if (fwd->servers)
         _peer = fwd->servers->_peer;         /* might be NULL */
 
     if (_peer) {
         const char *url;
 
         if (_peer->options.originserver)
             url = orig_request->urlpath.termedBuf();
         else
             url = entry->url();
 
         HttpRequest * proxy_req = new HttpRequest(orig_request->method,
                                   orig_request->protocol, url);
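
For reference, a minimal standalone sketch of the buffer policy the message describes: start at the 16KB minimum and grow toward the 256KB maximum only when the parser cannot make progress without more contiguous lookahead. This is not Squid's MemBuf; the ReadBuffer class and its methods are illustrative stand-ins.

// A sketch of the grow-only-when-the-parser-needs-it policy. Not Squid code.
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <iostream>
#include <vector>

class ReadBuffer {
public:
    ReadBuffer(std::size_t szInit, std::size_t szMax)
        : buf_(szInit), used_(0), max_(szMax) {}

    std::size_t spaceSize() const { return buf_.size() - used_; }
    std::size_t capacity() const { return buf_.size(); }

    // Called only when the parser needs more lookahead than fits: double
    // capacity, capped at max_. Returning false means the message demands
    // more lookahead than we are willing to buffer (likely hostile).
    bool grow() {
        if (buf_.size() >= max_)
            return false;
        buf_.resize(std::min(buf_.size() * 2, max_));
        return true;
    }

    void append(std::size_t n) { used_ += n; }   // after a network read
    void consume(std::size_t n) {                // after handing bytes to Store
        std::memmove(buf_.data(), buf_.data() + n, used_ - n);
        used_ -= n;
    }

private:
    std::vector<char> buf_;
    std::size_t used_;
    std::size_t max_;
};

int main() {
    ReadBuffer readBuf(16 * 1024, 256 * 1024);

    // Body bytes: Store consumes everything after each read, so the buffer
    // never fills and stays at its 16KB minimum.
    for (int i = 0; i < 100; ++i) {
        readBuf.append(readBuf.spaceSize()); // pretend read() filled the space
        readBuf.consume(16 * 1024);          // Store takes it all
    }
    std::cout << "after body reads: " << readBuf.capacity() << " bytes\n";

    // An oversized header: the parser consumes nothing until the header is
    // complete, so the buffer fills and must grow, up to the 256KB cap.
    while (readBuf.spaceSize() > 0)
        readBuf.append(readBuf.spaceSize());
    while (readBuf.grow())
        readBuf.append(readBuf.spaceSize()); // keep "reading" the big header
    std::cout << "after header growth: " << readBuf.capacity() << " bytes\n";
}

In this model the capacity stays at 16384 for the whole body-transfer loop and only climbs to 262144 when a header refuses to parse, which is the behavior the message argues keeps memory use low without slowing down busy transfers.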
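The next-to-last paragraph of the message distinguishes two unrelated buffers, and the distinction is easy to see in code. The sketch below, under POSIX assumptions, shows the standard setsockopt(SO_RCVBUF) call that tunables like SQUID_TCP_SO_RCVBUF ultimately feed, next to an application-level parse buffer like the one this patch resizes; the demo() function and buffer sizes are illustrative, not Squid code.

// Kernel TCP receive queue vs. application HTTP parse buffer. A sketch.
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

void demo(int sock) {
    // Kernel side: how much data TCP may queue before the application
    // reads it; this is what SO_RCVBUF-style tunables control.
    int rcvbuf = 64 * 1024;
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

    // Application side: how many bytes one read() can pull in and how much
    // contiguous lookahead the parser gets. Sized independently of SO_RCVBUF.
    std::vector<char> parseBuf(16 * 1024); // would grow toward 256KB on demand
    ssize_t n = read(sock, parseBuf.data(), parseBuf.size());
    (void)n; // parse headers, hand body bytes to the store, etc.
}

The two sizes serve different goals: SO_RCVBUF is about TCP flow control and throughput across the network, while the parse buffer is about how much of a message the HTTP parser can examine at once, which is why the message declines to derive one from the other.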