=== modified file 'doc/release-notes/release-3.5.sgml' --- doc/release-notes/release-3.5.sgml 2014-07-13 05:28:15 +0000 +++ doc/release-notes/release-3.5.sgml 2014-07-30 12:39:08 +0000 @@ -26,40 +26,41 @@ Known issues

Although this release is deemed good enough for use in many setups, please note the existence of . Changes since earlier releases of Squid-3.5

The 3.5 change history can be . Major new features since Squid-3.4

Squid 3.5 represents a new feature release above 3.4.

The most important of these new features are:
 * Support libecap v1.0
 * Authentication helper query extensions
 * Support named services
 * Upgraded squidclient tool
 * Helper support for concurrency channels
 + Receive PROXY protocol, Versions 1 & 2

Most user-facing changes are reflected in squid.conf (see below). Support libecap v1.0

Details at .

The new libecap version allows Squid to better check the version of the eCAP adapter being loaded as well as the version of the eCAP library being used.

Squid-3.5 can support eCAP adapters built with libecap v1.0, but no longer supports adapters built with earlier libecap versions due to API changes. Authentication helper query extensions

Details at . @@ -146,71 +147,111 @@ The default is to use X.509 certificate encryption instead.

When performing TLS/SSL, server certificates are always verified; the results are shown at debug level 3. The encryption type is displayed at debug level 2, and the connection is used to send and receive the messages regardless of verification results. Helper support for concurrency channels

Helper concurrency greatly reduces the communication lag between Squid and its helpers allowing faster transaction speeds even on sequential helpers.
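Concurrency is enabled through the existing *_children directives. A minimal sketch (the child counts and channel count below are illustrative, not recommendations):

  url_rewrite_children 20 startup=5 idle=1 concurrency=10

With concurrency=10 each helper process may be sent up to ten requests at once, each prefixed with a channel-ID that the helper must echo back in its reply.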

The Digest authentication, Store-ID, and URL-rewrite helpers packaged with Squid have been updated to support concurrency channels. They will auto-detect the channel-ID field and will produce the appropriate response format. With these helpers concurrency may now be set to 0 or any higher number as desired. +Receive PROXY protocol, Versions 1 & 2 +

More info at + +

PROXY protocol provides a simple way for proxies and tunnels of any kind to + relay the original client source details without having to alter or understand + the protocol being relayed on the connection. + +
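For reference, a version 1 PROXY header is a single text line sent ahead of the relayed traffic; the addresses below are illustrative only:

  PROXY TCP4 192.0.2.10 203.0.113.5 51000 3128

Version 2 conveys the same information in a binary header that begins with a fixed 12-octet magic sequence.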

Squid currently supports receiving HTTP traffic from a client proxy using this protocol. + An http_port which has been configured to receive this protocol may only be used to + receive traffic from client software sending in this protocol. + HTTP traffic without the PROXY header is not accepted on such a port. + +

The accel and intercept options are still used to identify the + traffic syntax being delivered by the client proxy. + +

Squid can be configured by adding an http_port + with the proxy-surrogate mode flag. The proxy_forwarded_access + must also be configured with src ACLs to whitelist proxies which are + trusted to send correct client details. + +

Forward-proxy traffic from a client proxy: + + http_port 3128 proxy-surrogate + proxy_forwarded_access allow localhost + + +

Intercepted traffic from a client proxy or tunnel: + + http_port 3128 intercept proxy-surrogate + proxy_forwarded_access allow localhost + + +

Known Issue: + Use of proxy-surrogate on https_port is not supported. + + Changes to squid.conf since Squid-3.4

There have been changes to Squid's configuration file since Squid-3.4.

Squid supports reading configuration option parameters from external files using the syntax parameters("/path/filename"). For example: acl whitelist dstdomain parameters("/etc/squid/whitelist.txt")

The squid.conf macro ${service_name} is added to provide the service name of the process parsing the config.
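As an illustrative sketch, the macro could be used to keep per-service logs separate (the path, and the assumption that the macro expands inside access_log values, are hypothetical):

  access_log daemon:/var/log/squid/access-${service_name}.log squid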

There have also been changes to individual directives in the config file. This section gives a thorough account of those changes in three categories:

New tags

collapsed_forwarding

Ported from Squid-2 with no configuration or visible behaviour changes. Collapsing of requests is performed across SMP workers. + proxy_forwarded_access +

Renamed from follow_x_forwarded_for and extended to control more + ways of locating the indirect (original) client IP details. + send_hit

New configuration directive to enable/disable sending cached content based on ACL selection. ACL can be based on client request or cached response details. sslproxy_session_cache_size

New directive which sets the cache size to use for the TLS/SSL session cache. sslproxy_session_ttl

New directive to specify the time in seconds the TLS/SSL session is valid. store_id_extras

New directive to send additional lookup parameters to the configured Store-ID helper program. It takes a string which may contain logformat %macros.

The Store-ID helper input format is now: [channel-ID] url [extras]

The default value for extras is: "%>a/%>A %un %>rm myip=%la myport=%lp" @@ -259,75 +300,80 @@

These connections differ from HTTP persistent connections in that they have not been used for HTTP messaging (and may never be). They may be turned into persistent connections after their first use, subject to the same keep-alive criteria any HTTP connection is checked for. forward_max_tries

Default value increased to 25 destinations to allow better contact and IPv4 failover with domains using long lists of IPv6 addresses. ftp_epsv

Converted into an Access List, with the allow/deny value driven by ACLs using Squid's standard first-match-wins evaluation.
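A minimal sketch of the new ACL-driven form, assuming a hypothetical dstdomain ACL for a server that mishandles EPSV:

  acl broken_ftp dstdomain ftp.example.com
  ftp_epsv deny broken_ftp
  ftp_epsv allow all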

The old values of on and off imply allow all and deny all respectively and are now deprecated. Do not combine use of on/off values with ACL configuration. http_port

protocol= option altered to accept protocol version details. Currently supported values are: HTTP, HTTP/1.1, HTTPS, HTTPS/1.1 +

New option proxy-surrogate to mark ports receiving PROXY + protocol version 1 or 2 traffic. https_port

protocol= option altered to accept protocol version details. Currently supported values are: HTTP, HTTP/1.1, HTTPS, HTTPS/1.1 logformat

New format code %credentials to log the client credentials token.
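A hypothetical logformat using the new code alongside existing ones (the format name and field selection are examples only):

  logformat authlog %ts.%03tu %>a %credentials %rm %ru %>Hs
  access_log /var/log/squid/auth.log authlog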

New format code %tS to log transaction start time in "seconds.milliseconds" format, similar to the existing access.log "current time" field (%ts.%03tu) which logs the corresponding transaction finish time. Removed tags

cache_dir

COSS storage type is formally replaced by Rock storage type. cache_dns_program

The DNS external helper interface has been removed. It was no longer able to provide a high-performance service, and the internal DNS client library together with multicast DNS covers all modern use-cases. cache_peer

idle= replaced by standby=.
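An illustrative cache_peer line using the new option (hostname and connection count are examples only):

  cache_peer parent.example.com parent 3128 0 no-query standby=10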

NOTE that standby connections are started earlier and available in more circumstances than squid-2 idle connections were. They are also spread over all IPs of the peer. dns_children

DNS external helper interface has been removed. + follow_x_forwarded_for +

Renamed to proxy_forwarded_access and extended. + Changes to ./configure options since Squid-3.4

There have been some changes to Squid's build configuration since Squid-3.4. This section gives an account of those changes in three categories: New options

BUILDCXX= === modified file 'doc/rfc/1-index.txt' --- doc/rfc/1-index.txt 2014-06-09 01:38:06 +0000 +++ doc/rfc/1-index.txt 2014-07-25 09:18:15 +0000 @@ -1,40 +1,43 @@ draft-ietf-radext-digest-auth-06.txt RADIUS Extension for Digest Authentication A proposed extension to Radius for Digest authentication via RADIUS servers. draft-cooper-webi-wpad-00.txt draft-ietf-svrloc-wpad-template-00.txt Web Proxy Auto-Discovery Protocol -- WPAD documents how MSIE and several other browsers automatically find their proxy settings from DHCP and/or DNS draft-forster-wrec-wccp-v1-00.txt WCCP 1.0 draft-wilson-wccp-v2-12-oct-2001.txt WCCP 2.0 draft-vinod-carp-v1-03.txt Microsoft CARP peering algorithm +proxy-protocol.txt + The PROXY protocol, Versions 1 & 2 + rfc0959.txt FTP rfc1035.txt DNS for IPv4 rfc1157.txt A Simple Network Management Protocol (SNMP) SNMP v1 Specification. SNMP v2 is documented in several RFCs, namely, 1902,1903,1904,1905,1906,1907. rfc1738.txt Uniform Resource Locators (URL) (updated by RFC 3986, but not obsoleted) rfc1902.txt Structure of Managament Information (SMI) for SNMPv2 Management information is viewed as a collection of managed objects, the Management Information Base (MIB). MIB modules are written using an adapted subset of OSI's Abstract Syntax === modified file 'src/Makefile.am' --- src/Makefile.am 2014-07-23 12:51:55 +0000 +++ src/Makefile.am 2014-07-25 12:05:51 +0000 @@ -1609,40 +1609,41 @@ acl/libapi.la \ base/libbase.la \ libsquid.la \ ip/libip.la \ fs/libfs.la \ comm/libcomm.la \ eui/libeui.la \ icmp/libicmp.la icmp/libicmp-core.la \ log/liblog.la \ format/libformat.la \ $(REPL_OBJS) \ $(DISK_LIBS) \ $(DISK_OS_LIBS) \ $(ADAPTATION_LIBS) \ $(ESI_LIBS) \ $(SSL_LIBS) \ anyp/libanyp.la \ ipc/libipc.la \ mgr/libmgr.la \ $(SNMP_LIBS) \ + parser/libsquid-parser.la \ $(top_builddir)/lib/libmisccontainers.la \ $(top_builddir)/lib/libmiscencoding.la \ $(top_builddir)/lib/libmiscutil.la \ $(NETTLELIB) \ $(REGEXLIB) \ $(SQUID_CPPUNIT_LIBS) \ $(SQUID_CPPUNIT_LA) \ $(SSLLIB) \ $(KRB5LIBS) \ $(COMPAT_LIB) \ $(XTRA_LIBS) tests_testCacheManager_LDFLAGS = $(LIBADD_DL) tests_testCacheManager_DEPENDENCIES = \ $(REPL_OBJS) \ $(SQUID_CPPUNIT_LA) tests_testDiskIO_SOURCES = \ CacheDigest.h \ tests/stub_CacheDigest.cc \ cbdata.cc \ @@ -2037,40 +2038,41 @@ $(DISKIO_GEN_SOURCE) tests_testEvent_LDADD = \ http/libsquid-http.la \ ident/libident.la \ acl/libacls.la \ acl/libstate.la \ acl/libapi.la \ base/libbase.la \ libsquid.la \ ip/libip.la \ fs/libfs.la \ anyp/libanyp.la \ icmp/libicmp.la icmp/libicmp-core.la \ comm/libcomm.la \ log/liblog.la \ format/libformat.la \ $(REPL_OBJS) \ $(ADAPTATION_LIBS) \ $(ESI_LIBS) \ $(SSL_LIBS) \ + parser/libsquid-parser.la \ $(top_builddir)/lib/libmisccontainers.la \ $(top_builddir)/lib/libmiscencoding.la \ $(top_builddir)/lib/libmiscutil.la \ $(DISK_LIBS) \ $(DISK_OS_LIBS) \ ipc/libipc.la \ mgr/libmgr.la \ $(SNMP_LIBS) \ $(NETTLELIB) \ $(REGEXLIB) \ $(SQUID_CPPUNIT_LIBS) \ $(SQUID_CPPUNIT_LA) \ $(SSLLIB) \ $(KRB5LIBS) \ $(COMPAT_LIB) \ $(XTRA_LIBS) tests_testEvent_LDFLAGS = $(LIBADD_DL) tests_testEvent_DEPENDENCIES = \ $(REPL_OBJS) \ $(SQUID_CPPUNIT_LA) @@ -2287,40 +2289,41 @@ $(DISKIO_GEN_SOURCE) tests_testEventLoop_LDADD = \ http/libsquid-http.la \ ident/libident.la \ acl/libacls.la \ acl/libstate.la \ acl/libapi.la \ base/libbase.la \ libsquid.la \ ip/libip.la \ fs/libfs.la \ anyp/libanyp.la \ icmp/libicmp.la icmp/libicmp-core.la \ comm/libcomm.la \ log/liblog.la \ format/libformat.la \ $(REPL_OBJS) \ $(ADAPTATION_LIBS) \ $(ESI_LIBS) \ $(SSL_LIBS) \ + 
parser/libsquid-parser.la \ $(top_builddir)/lib/libmisccontainers.la \ $(top_builddir)/lib/libmiscencoding.la \ $(top_builddir)/lib/libmiscutil.la \ $(DISK_LIBS) \ $(DISK_OS_LIBS) \ ipc/libipc.la \ mgr/libmgr.la \ $(SNMP_LIBS) \ $(NETTLELIB) \ $(REGEXLIB) \ $(SQUID_CPPUNIT_LIBS) \ $(SQUID_CPPUNIT_LA) \ $(SSLLIB) \ $(KRB5LIBS) \ $(COMPAT_LIB) \ $(XTRA_LIBS) tests_testEventLoop_LDFLAGS = $(LIBADD_DL) tests_testEventLoop_DEPENDENCIES = \ $(REPL_OBJS) \ $(SQUID_CPPUNIT_LA) @@ -2535,40 +2538,41 @@ acl/libstate.la \ acl/libapi.la \ libsquid.la \ ip/libip.la \ fs/libfs.la \ anyp/libanyp.la \ icmp/libicmp.la icmp/libicmp-core.la \ comm/libcomm.la \ log/liblog.la \ format/libformat.la \ $(REPL_OBJS) \ $(DISK_LIBS) \ $(DISK_OS_LIBS) \ $(ADAPTATION_LIBS) \ $(ESI_LIBS) \ $(SSL_LIBS) \ ipc/libipc.la \ base/libbase.la \ mgr/libmgr.la \ $(SNMP_LIBS) \ + parser/libsquid-parser.la \ $(top_builddir)/lib/libmisccontainers.la \ $(top_builddir)/lib/libmiscencoding.la \ $(top_builddir)/lib/libmiscutil.la \ $(NETTLELIB) \ $(REGEXLIB) \ $(SQUID_CPPUNIT_LIBS) \ $(SQUID_CPPUNIT_LA) \ $(SSLLIB) \ $(KRB5LIBS) \ $(COMPAT_LIB) \ $(XTRA_LIBS) tests_test_http_range_LDFLAGS = $(LIBADD_DL) tests_test_http_range_DEPENDENCIES = \ $(SQUID_CPPUNIT_LA) tests_testHttpParser_SOURCES = \ Debug.h \ HttpParser.cc \ HttpParser.h \ MemBuf.cc \ @@ -2825,40 +2829,41 @@ acl/libacls.la \ acl/libstate.la \ acl/libapi.la \ libsquid.la \ ip/libip.la \ fs/libfs.la \ $(SSL_LIBS) \ ipc/libipc.la \ base/libbase.la \ mgr/libmgr.la \ anyp/libanyp.la \ $(SNMP_LIBS) \ icmp/libicmp.la icmp/libicmp-core.la \ comm/libcomm.la \ log/liblog.la \ format/libformat.la \ http/libsquid-http.la \ $(REPL_OBJS) \ $(ADAPTATION_LIBS) \ $(ESI_LIBS) \ + parser/libsquid-parser.la \ $(top_builddir)/lib/libmisccontainers.la \ $(top_builddir)/lib/libmiscencoding.la \ $(top_builddir)/lib/libmiscutil.la \ $(DISK_OS_LIBS) \ $(NETTLELIB) \ $(REGEXLIB) \ $(SQUID_CPPUNIT_LIBS) \ $(SQUID_CPPUNIT_LA) \ $(SSLLIB) \ $(KRB5LIBS) \ $(COMPAT_LIB) \ $(XTRA_LIBS) tests_testHttpRequest_LDFLAGS = $(LIBADD_DL) tests_testHttpRequest_DEPENDENCIES = \ $(REPL_OBJS) \ $(SQUID_CPPUNIT_LA) ## why so many sources? well httpHeaderTools requites ACLChecklist & friends. ## first line - what we are testing. tests_testStore_SOURCES= \ @@ -3669,40 +3674,41 @@ eui/libeui.la \ acl/libstate.la \ acl/libapi.la \ base/libbase.la \ libsquid.la \ ip/libip.la \ fs/libfs.la \ $(SSL_LIBS) \ ipc/libipc.la \ mgr/libmgr.la \ $(SNMP_LIBS) \ icmp/libicmp.la icmp/libicmp-core.la \ comm/libcomm.la \ log/liblog.la \ $(DISK_OS_LIBS) \ format/libformat.la \ $(REGEXLIB) \ $(REPL_OBJS) \ $(ADAPTATION_LIBS) \ $(ESI_LIBS) \ + parser/libsquid-parser.la \ $(top_builddir)/lib/libmisccontainers.la \ $(top_builddir)/lib/libmiscencoding.la \ $(top_builddir)/lib/libmiscutil.la \ $(NETTLELIB) \ $(COMPAT_LIB) \ $(SQUID_CPPUNIT_LIBS) \ $(SQUID_CPPUNIT_LA) \ $(SSLLIB) \ $(KRB5LIBS) \ $(COMPAT_LIB) \ $(XTRA_LIBS) tests_testURL_LDFLAGS = $(LIBADD_DL) tests_testURL_DEPENDENCIES = \ $(REPL_OBJS) \ $(SQUID_CPPUNIT_LA) tests_testSBuf_SOURCES= \ tests/testSBuf.h \ tests/testSBuf.cc \ tests/testMain.cc \ === modified file 'src/anyp/TrafficMode.h' --- src/anyp/TrafficMode.h 2013-02-04 09:47:50 +0000 +++ src/anyp/TrafficMode.h 2014-07-25 06:12:42 +0000 @@ -8,40 +8,50 @@ * Set of 'mode' flags defining types of trafic which can be received. * * Use to determine the processing steps which need to be applied * to this traffic under any special circumstances which may apply. 
*/ class TrafficMode { public: TrafficMode() : accelSurrogate(false), natIntercept(false), tproxyIntercept(false), tunnelSslBumping(false) {} TrafficMode(const TrafficMode &rhs) { operator =(rhs); } TrafficMode &operator =(const TrafficMode &rhs) { memcpy(this, &rhs, sizeof(TrafficMode)); return *this; } /** marks HTTP accelerator (reverse/surrogate proxy) traffic * * Indicating the following are required: * - URL translation from relative to absolute form * - restriction to origin peer relay recommended */ bool accelSurrogate; + /** marks ports receiving PROXY protocol traffic + * + * Indicating the following are required: + * - PROXY protocol magic header + * - src/dst IP retrieved from magic PROXY header + * - indirect client IP trust verification is mandatory + * - TLS is not supported + */ + bool proxySurrogate; + /** marks NAT intercepted traffic * * Indicating the following are required: * - NAT lookups * - URL translation from relative to absolute form * - Same-Origin verification is mandatory * - destination pinning is recommended * - authentication prohibited */ bool natIntercept; /** marks TPROXY intercepted traffic * * Indicating the following are required: * - src/dst IP inversion must be performed * - client IP should be spoofed if possible * - URL translation from relative to absolute form * - Same-Origin verification is mandatory * - destination pinning is recommended * - authentication prohibited === modified file 'src/cache_cf.cc' --- src/cache_cf.cc 2014-07-21 14:55:27 +0000 +++ src/cache_cf.cc 2014-07-30 12:42:49 +0000 @@ -3581,45 +3581,53 @@ } else if (strcmp(token, "transparent") == 0 || strcmp(token, "intercept") == 0) { if (s->flags.accelSurrogate || s->flags.tproxyIntercept) { debugs(3, DBG_CRITICAL, "FATAL: http(s)_port: Intercept mode requires its own interception port. It cannot be shared with other modes."); self_destruct(); } s->flags.natIntercept = true; Ip::Interceptor.StartInterception(); /* Log information regarding the port modes under interception. */ debugs(3, DBG_IMPORTANT, "Starting Authentication on port " << s->s); debugs(3, DBG_IMPORTANT, "Disabling Authentication on port " << s->s << " (interception enabled)"); } else if (strcmp(token, "tproxy") == 0) { if (s->flags.natIntercept || s->flags.accelSurrogate) { debugs(3,DBG_CRITICAL, "FATAL: http(s)_port: TPROXY option requires its own interception port. It cannot be shared with other modes."); self_destruct(); } s->flags.tproxyIntercept = true; Ip::Interceptor.StartTransparency(); /* Log information regarding the port modes under transparency. 
*/ debugs(3, DBG_IMPORTANT, "Disabling Authentication on port " << s->s << " (TPROXY enabled)"); + if (s->flags.proxySurrogate) { + debugs(3, DBG_IMPORTANT, "Disabling TPROXY Spoofing on port " << s->s << " (proxy-surrogate enabled)"); + } + if (!Ip::Interceptor.ProbeForTproxy(s->s)) { debugs(3, DBG_CRITICAL, "FATAL: http(s)_port: TPROXY support in the system does not work."); self_destruct(); } + } else if (strcmp(token, "proxy-surrogate") == 0) { + s->flags.proxySurrogate = true; + debugs(3, DBG_IMPORTANT, "Disabling TPROXY Spoofing on port " << s->s << " (proxy-surrogate enabled)"); + } else if (strncmp(token, "defaultsite=", 12) == 0) { if (!s->flags.accelSurrogate) { debugs(3, DBG_CRITICAL, "FATAL: http(s)_port: defaultsite option requires Acceleration mode flag."); self_destruct(); } safe_free(s->defaultsite); s->defaultsite = xstrdup(token + 12); } else if (strcmp(token, "vhost") == 0) { if (!s->flags.accelSurrogate) { debugs(3, DBG_CRITICAL, "WARNING: http(s)_port: vhost option is deprecated. Use 'accel' mode flag instead."); } s->flags.accelSurrogate = true; s->vhost = true; } else if (strcmp(token, "no-vhost") == 0) { if (!s->flags.accelSurrogate) { debugs(3, DBG_IMPORTANT, "ERROR: http(s)_port: no-vhost option requires Acceleration mode flag."); } s->vhost = false; } else if (strcmp(token, "vport") == 0) { if (!s->flags.accelSurrogate) { @@ -3783,84 +3791,91 @@ self_destruct(); return; } char *token = ConfigParser::NextToken(); if (!token) { self_destruct(); return; } AnyP::PortCfgPointer s = new AnyP::PortCfg(); s->setTransport(protocol); parsePortSpecification(s, token); /* parse options ... */ while ((token = ConfigParser::NextToken())) { parse_port_option(s, token); } -#if USE_OPENSSL if (s->transport.protocol == AnyP::PROTO_HTTPS) { +#if USE_OPENSSL /* ssl-bump on https_port configuration requires either tproxy or intercept, and vice versa */ const bool hijacked = s->flags.isIntercepted(); if (s->flags.tunnelSslBumping && !hijacked) { debugs(3, DBG_CRITICAL, "FATAL: ssl-bump on https_port requires tproxy/intercept which is missing."); self_destruct(); } if (hijacked && !s->flags.tunnelSslBumping) { debugs(3, DBG_CRITICAL, "FATAL: tproxy/intercept on https_port requires ssl-bump which is missing."); self_destruct(); } - } #endif + if (s->transport.protocol == AnyP::PROTO_HTTPS) { + debugs(3,DBG_CRITICAL, "FATAL: https_port: proxy-surrogate option is not supported on HTTPS ports."); + self_destruct(); + } + } if (Ip::EnableIpv6&IPV6_SPECIAL_SPLITSTACK && s->s.isAnyAddr()) { // clone the port options from *s to *(s->next) s->next = s->clone(); s->next->s.setIPv4(); debugs(3, 3, AnyP::UriScheme(s->transport.protocol).c_str() << "_port: clone wildcard address for split-stack: " << s->s << " and " << s->next->s); } while (*head != NULL) head = &((*head)->next); *head = s; } static void dump_generic_port(StoreEntry * e, const char *n, const AnyP::PortCfgPointer &s) { char buf[MAX_IPSTRLEN]; storeAppendPrintf(e, "%s %s", n, s->s.toUrl(buf,MAX_IPSTRLEN)); // MODES and specific sub-options. 
if (s->flags.natIntercept) storeAppendPrintf(e, " intercept"); else if (s->flags.tproxyIntercept) storeAppendPrintf(e, " tproxy"); + else if (s->flags.proxySurrogate) + storeAppendPrintf(e, " proxy-surrogate"); + else if (s->flags.accelSurrogate) { storeAppendPrintf(e, " accel"); if (s->vhost) storeAppendPrintf(e, " vhost"); if (s->vport < 0) storeAppendPrintf(e, " vport"); else if (s->vport > 0) storeAppendPrintf(e, " vport=%d", s->vport); if (s->defaultsite) storeAppendPrintf(e, " defaultsite=%s", s->defaultsite); // TODO: compare against prefix of 'n' instead of assuming http_port if (s->transport.protocol != AnyP::PROTO_HTTP) storeAppendPrintf(e, " protocol=%s", AnyP::UriScheme(s->transport.protocol).c_str()); if (s->allow_direct) storeAppendPrintf(e, " allow-direct"); === modified file 'src/cf.data.pre' --- src/cf.data.pre 2014-07-21 14:55:27 +0000 +++ src/cf.data.pre 2014-07-30 13:46:13 +0000 @@ -1077,79 +1077,91 @@ acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT NOCOMMENT_END DOC_END -NAME: follow_x_forwarded_for +NAME: proxy_forwarded_access follow_x_forwarded_for TYPE: acl_access -IFDEF: FOLLOW_X_FORWARDED_FOR LOC: Config.accessList.followXFF DEFAULT_IF_NONE: deny all -DEFAULT_DOC: X-Forwarded-For header will be ignored. +DEFAULT_DOC: indirect client IP will not be accepted. DOC_START - Allowing or Denying the X-Forwarded-For header to be followed to - find the original source of a request. + Determine which client proxies can be trusted to provide correct + information regarding real client IP address. Requests may pass through a chain of several other proxies - before reaching us. The X-Forwarded-For header will contain a - comma-separated list of the IP addresses in the chain, with the - rightmost address being the most recent. + before reaching us. The original source details may by sent in: + * HTTP message Forwarded header, or + * HTTP message X-Forwarded-For header, or + * PROXY protocol connection header. If a request reaches us from a source that is allowed by this - configuration item, then we consult the X-Forwarded-For header - to see where that host received the request from. If the - X-Forwarded-For header contains multiple addresses, we continue - backtracking until we reach an address for which we are not allowed - to follow the X-Forwarded-For header, or until we reach the first - address in the list. For the purpose of ACL used in the - follow_x_forwarded_for directive the src ACL type always matches - the address we are testing and srcdomain matches its rDNS. + directive, then we trust the information it provides regarding + the IP of the client it received from (if any). + + For the purpose of ACLs used in this directive the src ACL type always + matches the address we are testing and srcdomain matches its rDNS. 
+ + For proxy-surrogate ports an allow match is required for Squid to + permit the corresponding TCP connection, before Squid even looks for + HTTP request headers. If there is an allow match, Squid starts using + PROXY header information to determine the source address of the + connection for all future ACL checks. A deny match results in TCP + connection closure. Evaluation described in this paragraph does not + happen on non proxy-surrogate ports. + + On each HTTP request Squid checks for X-Forwarded-For header fields. + If found the header values are iterated in reverse order and an allow + match is required for Squid to continue on to the next value. + The verification ends when a value receives a deny match, cannot be + tested, or there are no more values to test. + NOTE: Squid does not yet follow the Forwarded HTTP header. The end result of this process is an IP address that we will refer to as the indirect client address. This address may be treated as the client address for access control, ICAP, delay pools and logging, depending on the acl_uses_indirect_client, icap_uses_indirect_client, delay_pool_uses_indirect_client, log_uses_indirect_client and tproxy_uses_indirect_client options. This clause only supports fast acl types. See http://wiki.squid-cache.org/SquidFaq/SquidAcl for details. SECURITY CONSIDERATIONS: - Any host for which we follow the X-Forwarded-For header - can place incorrect information in the header, and Squid + Any host for which we accept client IP details can place + incorrect information in the relevant header, and Squid will use the incorrect information as if it were the source address of the request. This may enable remote hosts to bypass any access control restrictions that are based on the client's source addresses. For example: acl localhost src 127.0.0.1 acl my_other_proxy srcdomain .proxy.example.com follow_x_forwarded_for allow localhost follow_x_forwarded_for allow my_other_proxy DOC_END NAME: acl_uses_indirect_client COMMENT: on|off TYPE: onoff IFDEF: FOLLOW_X_FORWARDED_FOR DEFAULT: on LOC: Config.onoff.acl_uses_indirect_client DOC_START @@ -1704,40 +1716,45 @@ always disable always PMTU discovery. In many setups of transparently intercepting proxies Path-MTU discovery can not work on traffic towards the clients. This is the case when the intercepting device does not fully track connections and fails to forward ICMP must fragment messages to the cache server. If you have such setup and experience that certain clients sporadically hang or never complete requests set disable-pmtu-discovery option to 'transparent'. name= Specifies a internal name for the port. Defaults to the port specification (port or addr:port) tcpkeepalive[=idle,interval,timeout] Enable TCP keepalive probes of idle connections. In seconds; idle is the initial time before TCP starts probing the connection, interval how often to probe, and timeout the time before giving up. + proxy-surrogate + Require PROXY protocol version 1 or 2 connections. + The proxy_forwarded_access is required to whitelist + downstream proxies which can be trusted. + If you run Squid on a dual-homed machine with an internal and an external interface we recommend you to specify the internal address:port in http_port. This way Squid will only be visible on the internal address. 
NOCOMMENT_START # Squid normally listens to port 3128 http_port @DEFAULT_HTTP_PORT@ NOCOMMENT_END DOC_END NAME: https_port IFDEF: USE_OPENSSL TYPE: PortCfg DEFAULT: none LOC: HttpsPortList DOC_START Usage: [ip:]port cert=certificate.pem [key=key.pem] [mode] [options...] === modified file 'src/client_side.cc' --- src/client_side.cc 2014-07-16 12:10:11 +0000 +++ src/client_side.cc 2014-07-30 13:42:47 +0000 @@ -102,40 +102,41 @@ #include "fd.h" #include "fde.h" #include "fqdncache.h" #include "FwdState.h" #include "globals.h" #include "http.h" #include "HttpHdrContRange.h" #include "HttpHeaderTools.h" #include "HttpReply.h" #include "HttpRequest.h" #include "ident/Config.h" #include "ident/Ident.h" #include "internal.h" #include "ipc/FdNotes.h" #include "ipc/StartListening.h" #include "log/access_log.h" #include "Mem.h" #include "MemBuf.h" #include "MemObject.h" #include "mime_header.h" +#include "parser/Tokenizer.h" #include "profiler/Profiler.h" #include "rfc1738.h" #include "SquidConfig.h" #include "SquidTime.h" #include "StatCounters.h" #include "StatHist.h" #include "Store.h" #include "TimeOrTag.h" #include "tools.h" #include "URL.h" #if USE_AUTH #include "auth/UserRequest.h" #endif #if USE_DELAY_POOLS #include "ClientInfo.h" #endif #if USE_OPENSSL #include "ssl/context_storage.h" #include "ssl/gadgets.h" @@ -2322,40 +2323,42 @@ #if THIS_VIOLATES_HTTP_SPECS_ON_URL_TRANSFORMATION if ((t = strchr(url, '#'))) /* remove HTML anchors */ *t = '\0'; #endif debugs(33,5, HERE << "repare absolute URL from " << (csd->transparent()?"intercept":(csd->port->flags.accelSurrogate ? "accel":""))); /* Rewrite the URL in transparent or accelerator mode */ /* NP: there are several cases to traverse here: * - standard mode (forward proxy) * - transparent mode (TPROXY) * - transparent mode with failures * - intercept mode (NAT) * - intercept mode with failures * - accelerator mode (reverse proxy) * - internal URL * - mixed combos of the above with internal URL + * - remote interception with PROXY protocol + * - remote reverse-proxy with PROXY protocol */ if (csd->transparent()) { /* intercept or transparent mode, properly working with no failures */ prepareTransparentURL(csd, http, url, req_hdr); } else if (internalCheck(url)) { /* internal URL mode */ /* prepend our name & port */ http->uri = xstrdup(internalLocalUri(NULL, url)); // We just re-wrote the URL. Must replace the Host: header. // But have not parsed there yet!! flag for local-only handling. http->flags.internal = true; } else if (csd->port->flags.accelSurrogate || csd->switchedToHttps()) { /* accelerator mode */ prepareAcceleratedURL(csd, http, url, req_hdr); } if (!http->uri) { /* No special rewrites have been applied above, use the @@ -2885,67 +2888,320 @@ bool ConnStateData::concurrentRequestQueueFilled() const { const int existingRequestCount = getConcurrentRequestCount(); // default to the configured pipeline size. // add 1 because the head of pipeline is counted in concurrent requests and not prefetch queue const int concurrentRequestLimit = Config.pipeline_max_prefetch + 1; // when queue filled already we cant add more. 
if (existingRequestCount >= concurrentRequestLimit) { debugs(33, 3, clientConnection << " max concurrent requests reached (" << concurrentRequestLimit << ")"); debugs(33, 5, clientConnection << " deferring new request until one is done"); return true; } return false; } /** + * Perform forwarded_access ACL tests on the client which + * connected to PROXY protocol port to see if we trust the + * sender enough to accept their PROXY header claim. + */ +bool +ConnStateData::proxyProtocolValidateClient() +{ + ACLFilledChecklist ch(Config.accessList.followXFF, NULL, clientConnection->rfc931); + ch.src_addr = clientConnection->remote; + ch.my_addr = clientConnection->local; + ch.conn(this); + + if (ch.fastCheck() != ACCESS_ALLOWED) + return proxyProtocolError("PROXY client not permitted by ACLs"); + + return true; +} + +/** + * Perform cleanup on PROXY protocol errors. + * If header parsing hits a fatal error terminate the connection, + * otherwise wait for more data. + */ +bool +ConnStateData::proxyProtocolError(const char *msg) +{ + if (msg) { + // This is important to know, but maybe not so much that flooding the log is okay. +#if QUIET_PROXY_PROTOCOL + // display the first of every 32 occurances at level 1, the others at level 2. + static uint8_t hide = 0; + debugs(33, (hide++ % 32 == 0 ? DBG_IMPORTANT : 2), msg << " from " << clientConnection); +#else + debugs(33, DBG_IMPORTANT, msg << " from " << clientConnection); +#endif + mustStop(msg); + } + return false; +} + +/// magic octet prefix for PROXY protocol version 1 +static const SBuf Proxy1p0magic("PROXY ", 6); + +/// magic octet prefix for PROXY protocol version 2 +static const SBuf Proxy2p0magic("\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A", 12); + +/** + * Test the connection read buffer for PROXY protocol header. + * Version 1 and 2 header currently supported. + */ +bool +ConnStateData::parseProxyProtocolHeader() +{ + // http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt + + // detect and parse PROXY/2.0 protocol header + if (in.buf.startsWith(Proxy2p0magic)) + return parseProxy2p0(); + + // detect and parse PROXY/1.0 protocol header + if (in.buf.startsWith(Proxy1p0magic)) + return parseProxy1p0(); + + // detect and terminate other protocols + if (in.buf.length() >= Proxy2p0magic.length()) { + // PROXY/1.0 magic is shorter, so we know that + // the input does not start with any PROXY magic + return proxyProtocolError("PROXY protocol error: invalid header"); + } + + // TODO: detect short non-magic prefixes earlier to avoid + // waiting for more data which may never come + + // not enough bytes to parse yet. + return false; +} + +/// parse the PROXY/1.0 protocol header from the connection read buffer +bool +ConnStateData::parseProxy1p0() +{ + ::Parser::Tokenizer tok(in.buf); + tok.skip(Proxy1p0magic); + + SBuf tcpVersion; + if (!tok.prefix(tcpVersion, CharacterSet::ALPHA+CharacterSet::DIGIT)) + return proxyProtocolError(tok.atEnd()?"PROXY/1.0 error: invalid protocol family":NULL); + + if (!tcpVersion.cmp("UNKNOWN")) { + // skip to first LF (assumes it is part of CRLF) + const SBuf::size_type pos = in.buf.findFirstOf(CharacterSet::LF); + if (pos != SBuf::npos) { + if (in.buf[pos-1] != '\r') + return proxyProtocolError("PROXY/1.0 error: missing CR"); + // found valid but unusable header + in.buf.consume(pos); + needProxyProtocolHeader_ = false; + return true; + } + // else, no LF found + + // protocol error only if there are more than 107 bytes prefix header + return proxyProtocolError(in.buf.length() > 107? 
"PROXY/1.0 error: missing CRLF":NULL); + + } else if (!tcpVersion.cmp("TCP",3)) { + + // skip SP after protocol version + if (!tok.skip(' ')) + return proxyProtocolError(tok.atEnd()?"PROXY/1.0 error: missing SP":NULL); + + SBuf ipa, ipb; + int64_t porta, portb; + const CharacterSet ipChars = CharacterSet("IP Address",".:") + CharacterSet::HEXDIG; + + // parse src-IP SP dst-IP SP src-port SP dst-port CRLF + if (!tok.prefix(ipa, ipChars) || !tok.skip(' ') || + !tok.prefix(ipb, ipChars) || !tok.skip(' ') || + !tok.int64(porta) || !tok.skip(' ') || + !tok.int64(portb) || !tok.skip('\r') || !tok.skip('\n')) + return proxyProtocolError(!tok.atEnd()?"PROXY/1.0 error: invalid syntax":NULL); + + in.buf = tok.remaining(); // sync buffers + needProxyProtocolHeader_ = false; // found successfully + + // parse IP and port strings + Ip::Address originalClient, originalDest; + + if (!originalClient.GetHostByName(ipa.c_str())) + return proxyProtocolError("PROXY/1.0 error: invalid src-IP address"); + + if (!originalDest.GetHostByName(ipb.c_str())) + return proxyProtocolError("PROXY/1.0 error: invalid dst-IP address"); + + if (porta > 0 && porta <= 0xFFFF) // max uint16_t + originalClient.port(static_cast(porta)); + else + return proxyProtocolError("PROXY/1.0 error: invalid src port"); + + if (portb > 0 && portb <= 0xFFFF) // max uint16_t + originalDest.port(static_cast(portb)); + else + return proxyProtocolError("PROXY/1.0 error: invalid dst port"); + + // we have original client and destination details now + // replace the client connection values + debugs(33, 5, "PROXY/1.0 protocol on connection " << clientConnection); + clientConnection->local = originalDest; + clientConnection->remote = originalClient; + clientConnection->flags ^= COMM_TRANSPARENT; // prevent TPROXY spoofing of this new IP. 
+ debugs(33, 5, "PROXY/1.0 upgrade: " << clientConnection); + + // repeat fetch ensuring the new client FQDN can be logged + if (Config.onoff.log_fqdn) + fqdncache_gethostbyaddr(clientConnection->remote, FQDN_LOOKUP_IF_MISS); + + return true; + } + + return false; +} + +/// parse the PROXY/2.0 protocol header from the connection read buffer +bool +ConnStateData::parseProxy2p0() +{ + if ((in.buf[0] & 0xF0) != 0x20) // version == 2 is mandatory + return proxyProtocolError("PROXY/2.0 error: invalid version"); + + const char command = (in.buf[0] & 0x0F); + if ((command & 0xFE) != 0x00) // values other than 0x0-0x1 are invalid + return proxyProtocolError("PROXY/2.0 error: invalid command"); + + const char family = (in.buf[1] & 0xF0) >>4; + if (family > 0x3) // values other than 0x0-0x3 are invalid + return proxyProtocolError("PROXY/2.0 error: invalid family"); + + const char proto = (in.buf[1] & 0x0F); + if (proto > 0x2) // values other than 0x0-0x2 are invalid + return proxyProtocolError("PROXY/2.0 error: invalid protocol type"); + + const char *clen = in.buf.rawContent() + Proxy2p0magic.length() + 2; + const uint16_t len = ntohs(*(reinterpret_cast(clen))); + + if (in.buf.length() < Proxy2p0magic.length() + 4 + len) + return false; // need more bytes + + in.buf.consume(Proxy2p0magic.length() + 4); // 4 being the extra bytes + const SBuf extra = in.buf.consume(len); + needProxyProtocolHeader_ = false; // found successfully + + // LOCAL connections do nothing with the extras + if (command == 0x00/* LOCAL*/) + return true; + + typedef union proxy_addr { + struct { /* for TCP/UDP over IPv4, len = 12 */ + struct in_addr src_addr; + struct in_addr dst_addr; + uint16_t src_port; + uint16_t dst_port; + } ipv4_addr; + struct { /* for TCP/UDP over IPv6, len = 36 */ + struct in6_addr src_addr; + struct in6_addr dst_addr; + uint16_t src_port; + uint16_t dst_port; + } ipv6_addr; +#if NOT_SUPPORTED + struct { /* for AF_UNIX sockets, len = 216 */ + uint8_t src_addr[108]; + uint8_t dst_addr[108]; + } unix_addr; +#endif + } pax; + + const pax *ipu = reinterpret_cast(extra.rawContent()); + + // replace the client connection values + debugs(33, 5, "PROXY/2.0 protocol on connection " << clientConnection); + switch (family) + { + case 0x1: // IPv4 + clientConnection->local = ipu->ipv4_addr.dst_addr; + clientConnection->local.port(ntohs(ipu->ipv4_addr.dst_port)); + clientConnection->remote = ipu->ipv4_addr.src_addr; + clientConnection->remote.port(ntohs(ipu->ipv4_addr.src_port)); + clientConnection->flags ^= COMM_TRANSPARENT; // prevent TPROXY spoofing of this new IP. + break; + case 0x2: // IPv6 + clientConnection->local = ipu->ipv6_addr.dst_addr; + clientConnection->local.port(ntohs(ipu->ipv6_addr.dst_port)); + clientConnection->remote = ipu->ipv6_addr.src_addr; + clientConnection->remote.port(ntohs(ipu->ipv6_addr.src_port)); + clientConnection->flags ^= COMM_TRANSPARENT; // prevent TPROXY spoofing of this new IP. + break; + default: // do nothing + break; + } + debugs(33, 5, "PROXY/2.0 upgrade: " << clientConnection); + + // repeat fetch ensuring the new client FQDN can be logged + if (Config.onoff.log_fqdn) + fqdncache_gethostbyaddr(clientConnection->remote, FQDN_LOOKUP_IF_MISS); + + return true; +} + +/** * Attempt to parse one or more requests from the input buffer. * If a request is successfully parsed, even if the next request * is only partially parsed, it will return TRUE. 
*/ bool ConnStateData::clientParseRequests() { HttpRequestMethod method; bool parsed_req = false; debugs(33, 5, HERE << clientConnection << ": attempting to parse"); // Loop while we have read bytes that are not needed for producing the body // On errors, bodyPipe may become nil, but readMore will be cleared while (!in.buf.isEmpty() && !bodyPipe && flags.readMore) { connStripBufferWhitespace(this); /* Don't try to parse if the buffer is empty */ if (in.buf.isEmpty()) break; /* Limit the number of concurrent requests */ if (concurrentRequestQueueFilled()) break; /* Begin the parsing */ PROF_start(parseHttpRequest); + + // try to parse the PROXY protocol header magic bytes + if (needProxyProtocolHeader_ && !parseProxyProtocolHeader()) + break; + HttpParserInit(&parser_, in.buf.c_str(), in.buf.length()); /* Process request */ Http::ProtocolVersion http_ver; ClientSocketContext *context = parseHttpRequest(this, &parser_, &method, &http_ver); PROF_stop(parseHttpRequest); /* partial or incomplete request */ if (!context) { // TODO: why parseHttpRequest can just return parseHttpRequestAbort // (which becomes context) but checkHeaderLimits cannot? checkHeaderLimits(); break; } /* status -1 or 1 */ if (context) { debugs(33, 5, HERE << clientConnection << ": parsed a request"); AsyncCall::Pointer timeoutCall = commCbCall(5, 4, "clientLifetimeTimeout", CommTimeoutCbPtrFun(clientLifetimeTimeout, context->http)); @@ -3263,114 +3519,130 @@ sslBumpMode(Ssl::bumpEnd), switchedToHttps_(false), sslServerBump(NULL), #endif stoppedSending_(NULL), stoppedReceiving_(NULL) { pinning.host = NULL; pinning.port = -1; pinning.pinned = false; pinning.auth = false; pinning.zeroReply = false; pinning.peer = NULL; // store the details required for creating more MasterXaction objects as new requests come in clientConnection = xact->tcpClient; port = xact->squidPort; log_addr = xact->tcpClient->remote; log_addr.applyMask(Config.Addrs.client_netmask); - // ensure a buffer is present for this connection - in.maybeMakeSpaceAvailable(); - if (port->disable_pmtu_discovery != DISABLE_PMTU_OFF && (transparent() || port->disable_pmtu_discovery == DISABLE_PMTU_ALWAYS)) { #if defined(IP_MTU_DISCOVER) && defined(IP_PMTUDISC_DONT) int i = IP_PMTUDISC_DONT; if (setsockopt(clientConnection->fd, SOL_IP, IP_MTU_DISCOVER, &i, sizeof(i)) < 0) debugs(33, 2, "WARNING: Path MTU discovery disabling failed on " << clientConnection << " : " << xstrerror()); #else static bool reported = false; if (!reported) { debugs(33, DBG_IMPORTANT, "NOTICE: Path MTU discovery disabling is not supported on your platform."); reported = true; } #endif } +} + +void +ConnStateData::start() +{ + // ensure a buffer is present for this connection + in.maybeMakeSpaceAvailable(); typedef CommCbMemFunT Dialer; AsyncCall::Pointer call = JobCallback(33, 5, Dialer, this, ConnStateData::connStateClosed); comm_add_close_handler(clientConnection->fd, call); if (Config.onoff.log_fqdn) fqdncache_gethostbyaddr(clientConnection->remote, FQDN_LOOKUP_IF_MISS); #if USE_IDENT if (Ident::TheConfig.identLookup) { ACLFilledChecklist identChecklist(Ident::TheConfig.identLookup, NULL, NULL); - identChecklist.src_addr = xact->tcpClient->remote; - identChecklist.my_addr = xact->tcpClient->local; + identChecklist.src_addr = clientConnection->remote; + identChecklist.my_addr = clientConnection->local; if (identChecklist.fastCheck() == ACCESS_ALLOWED) - Ident::Start(xact->tcpClient, clientIdentDone, this); + Ident::Start(clientConnection, clientIdentDone, this); } #endif 
clientdbEstablished(clientConnection->remote, 1); + needProxyProtocolHeader_ = port->flags.proxySurrogate; + if (needProxyProtocolHeader_) { + if (!proxyProtocolValidateClient()) // will close the connection on failure + return; + } + + // prepare any child API state that is needed + BodyProducer::start(); + HttpControlMsgSink::start(); + + // if all is well, start reading flags.readMore = true; + readSomeData(); } /** Handle a new connection on HTTP socket. */ void httpAccept(const CommAcceptCbParams ¶ms) { MasterXaction::Pointer xact = params.xaction; AnyP::PortCfgPointer s = xact->squidPort; // NP: it is possible the port was reconfigured when the call or accept() was queued. if (params.flag != Comm::OK) { // Its possible the call was still queued when the client disconnected debugs(33, 2, "httpAccept: " << s->listenConn << ": accept failure: " << xstrerr(params.xerrno)); return; } debugs(33, 4, HERE << params.conn << ": accepted"); fd_note(params.conn->fd, "client http connect"); if (s->tcp_keepalive.enabled) { commSetTcpKeepalive(params.conn->fd, s->tcp_keepalive.idle, s->tcp_keepalive.interval, s->tcp_keepalive.timeout); } ++ incoming_sockets_accepted; // Socket is ready, setup the connection manager to start using it ConnStateData *connState = new ConnStateData(xact); typedef CommCbMemFunT TimeoutDialer; AsyncCall::Pointer timeoutCall = JobCallback(33, 5, TimeoutDialer, connState, ConnStateData::requestTimeout); commSetConnTimeout(params.conn, Config.Timeout.request, timeoutCall); - connState->readSomeData(); + AsyncJob::Start(connState); #if USE_DELAY_POOLS fd_table[params.conn->fd].clientInfo = NULL; if (Config.onoff.client_db) { /* it was said several times that client write limiter does not work if client_db is disabled */ ClientDelayPools& pools(Config.ClientDelay.pools); ACLFilledChecklist ch(NULL, NULL, NULL); // TODO: we check early to limit error response bandwith but we // should recheck when we can honor delay_pool_uses_indirect // TODO: we should also pass the port details for myportname here. ch.src_addr = params.conn->remote; ch.my_addr = params.conn->local; for (unsigned int pool = 0; pool < pools.size(); ++pool) { /* pools require explicit 'allow' to assign a client into them */ if (pools[pool].access) { @@ -3524,41 +3796,41 @@ debugs(83, 3, "clientNegotiateSSL: FD " << fd << " negotiated cipher " << SSL_get_cipher(ssl)); client_cert = SSL_get_peer_certificate(ssl); if (client_cert != NULL) { debugs(83, 3, "clientNegotiateSSL: FD " << fd << " client certificate: subject: " << X509_NAME_oneline(X509_get_subject_name(client_cert), 0, 0)); debugs(83, 3, "clientNegotiateSSL: FD " << fd << " client certificate: issuer: " << X509_NAME_oneline(X509_get_issuer_name(client_cert), 0, 0)); X509_free(client_cert); } else { debugs(83, 5, "clientNegotiateSSL: FD " << fd << " has no certificate."); } - conn->readSomeData(); + AsyncJob::Start(conn); } /** * If SSL_CTX is given, starts reading the SSL handshake. * Otherwise, calls switchToHttps to generate a dynamic SSL_CTX. 
*/ static void httpsEstablish(ConnStateData *connState, SSL_CTX *sslContext, Ssl::BumpMode bumpMode) { SSL *ssl = NULL; assert(connState); const Comm::ConnectionPointer &details = connState->clientConnection; if (sslContext && !(ssl = httpsCreate(details, sslContext))) return; typedef CommCbMemFunT TimeoutDialer; AsyncCall::Pointer timeoutCall = JobCallback(33, 5, TimeoutDialer, connState, ConnStateData::requestTimeout); commSetConnTimeout(details, Config.Timeout.request, timeoutCall); === modified file 'src/client_side.h' --- src/client_side.h 2014-07-14 09:48:47 +0000 +++ src/client_side.h 2014-07-30 12:41:05 +0000 @@ -313,40 +313,41 @@ \param request if it is not NULL also checks if the pinning info refers to the request client side HttpRequest \param CachePeer if it is not NULL also check if the CachePeer is the pinning CachePeer \return The details of the server side connection (may be closed if failures were present). */ const Comm::ConnectionPointer validatePinnedConnection(HttpRequest *request, const CachePeer *peer); /** * returts the pinned CachePeer if exists, NULL otherwise */ CachePeer *pinnedPeer() const {return pinning.peer;} bool pinnedAuth() const {return pinning.auth;} // pining related comm callbacks void clientPinnedConnectionClosed(const CommCloseCbParams &io); // comm callbacks void clientReadRequest(const CommIoCbParams &io); void connStateClosed(const CommCloseCbParams &io); void requestTimeout(const CommTimeoutCbParams ¶ms); // AsyncJob API + virtual void start(); virtual bool doneAll() const { return BodyProducer::doneAll() && false;} virtual void swanSong(); /// Changes state so that we close the connection and quit after serving /// the client-side-detected error response instead of getting stuck. void quitAfterError(HttpRequest *request); // meant to be private /// The caller assumes responsibility for connection closure detection. void stopPinnedConnectionMonitoring(); #if USE_OPENSSL /// called by FwdState when it is done bumping the server void httpsPeeked(Comm::ConnectionPointer serverConnection); /// Start to create dynamic SSL_CTX for host or uses static port SSL context. void getSslContextStart(); /** * Done create dynamic ssl certificate. * * \param[in] isNew if generated certificate is new, so we need to add this certificate to storage. @@ -382,40 +383,50 @@ #endif /* clt_conn_tag=tag annotation access */ const SBuf &connectionTag() const { return connectionTag_; } void connectionTag(const char *aTag) { connectionTag_ = aTag; } protected: void startDechunkingRequest(); void finishDechunkingRequest(bool withSuccess); void abortChunkedRequestBody(const err_type error); err_type handleChunkedRequestBody(size_t &putSize); void startPinnedConnectionMonitoring(); void clientPinnedConnectionRead(const CommIoCbParams &io); private: int connFinishedWithConn(int size); void clientAfterReadingRequests(); bool concurrentRequestQueueFilled() const; + /* PROXY protocol functionality */ + bool proxyProtocolValidateClient(); + bool parseProxyProtocolHeader(); + bool parseProxy1p0(); + bool parseProxy2p0(); + bool proxyProtocolError(const char *reason = NULL); + + /// whether PROXY protocol header is still expected + bool needProxyProtocolHeader_; + #if USE_AUTH /// some user details that can be used to perform authentication on this connection Auth::UserRequest::Pointer auth_; #endif HttpParser parser_; // XXX: CBDATA plays with public/private and leaves the following 'private' fields all public... 
:( #if USE_OPENSSL bool switchedToHttps_; /// The SSL server host name appears in CONNECT request or the server ip address for the intercepted requests String sslConnectHostOrIp; ///< The SSL server host name as passed in the CONNECT request String sslCommonName; ///< CN name for SSL certificate generation String sslBumpCertKey; ///< Key to use to store/retrieve generated certificate /// HTTPS server cert. fetching state for bump-ssl-server-first Ssl::ServerBump *sslServerBump; Ssl::CertSignAlgorithm signAlgorithm; ///< The signing algorithm to use #endif