need help analyzing logs

From: Marc Haber <Marc.Haber-lists@dont-contact.us>
Date: Fri, 26 Feb 1999 20:50:22 GMT

Hi!

I am having bizarre problems with my squid. I have already asked on
this list and have done some research in the meantime. However, I have
not had much success analyzing the logs I created. I am including some
of my logs along with the comments I have already made; I hope I am
not too far off. Can anyone help me out with this?

My squid.conf (this time, I won't even try to remove the actual host
names because I would surely goof that up again) looks like this:

|mh@torres:/usr/local/stow/squid/bin > pgrep -v "(^#)|(^$)" /etc/squid.conf
|http_port 3128
|icp_port 3130
|htcp_port 4827
|cache_peer junkbuster.rz.uni-karlsruhe.de parent 8000 3130 no-query
|cache_peer www-cache.rz.uni-karlsruhe.de parent 3128 3130 no-query
|cache_mem 8 MB
|maximum_object_size 1024 KB
|cache_dir /var/squid/cache 100 16 256
|cache_access_log /var/log/squid/access.log
|cache_log /var/log/squid/cache.log
|cache_store_log /var/log/squid/store.log
|emulate_httpd_log on
|mime_table /usr/local/etc/mime.conf
|pid_filename /var/run/squid.pid
|debug_options ALL,9 41,1 20,1 6,1 5,1 51,1 25,1 19,1 45,1 55,1 14,1 35,1 36,1
|log_fqdn on
|dns_children 2
|refresh_pattern . 10 20% 4320
|quick_abort_pct 80
|negative_ttl 5 minutes
|read_timeout 30 seconds
|pconn_timeout 60 seconds
|acl manager proto cache_object
|acl localhost src 127.0.0.1/255.255.255.255
|acl all src 0.0.0.0/0.0.0.0
|acl allowed_hosts src 192.168.130.0/255.255.255.0
|acl SSL_ports port 443 563
|acl Safe_ports port 80 21 70 1025-65535
|acl CONNECT method CONNECT
|acl password proxy_auth 300
|acl http proto http
|acl ftp proto ftp
|http_access deny CONNECT !SSL_ports
|http_access deny !Safe_ports
|http_access allow allowed_hosts
|http_access allow manager allowed_hosts
|http_access deny all
|icp_access allow allowed_hosts
|icp_access deny all
|cache_peer_access junkbuster.rz.uni-karlsruhe.de allow http
|cache_peer_access www-cache.rz.uni-karlsruhe.de allow ftp
|cache_peer_access junkbuster.rz.uni-karlsruhe.de deny all
|cache_peer_access www-cache.rz.uni-karlsruhe.de deny all
|cache_mgr Marc.Haber@gmx.de
|cache_effective_user nobody
|cache_effective_group nogroup
|visible_hostname www-cache.gf1.internal
|acl local-intranet dstdomain gf1.internal
|always_direct allow local-intranet
|never_direct allow all
|mh@torres:/usr/local/stow/squid/bin >

With this setup, I would like HTTP requests to go through the
junkbuster and FTP requests through www-cache (which is another squid
run by the university).
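
(Restated from the cache_peer and cache_peer_access lines above, the
intended mapping is:)

|# intended routing, as I read my own configuration:
|#   proto http -> junkbuster.rz.uni-karlsruhe.de (parent, HTTP port 8000)
|#   proto ftp  -> www-cache.rz.uni-karlsruhe.de  (parent, HTTP port 3128)
|cache_peer_access junkbuster.rz.uni-karlsruhe.de allow http
|cache_peer_access www-cache.rz.uni-karlsruhe.de allow ftp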

(1) Trying to get http://www.teltarif.de/db/neu.html makes the browser
display "CGI error! Too many or wrong parameters or wrong placement of
& markers!". Going to the site directly, or telling the browser to
query the junkbuster directly, works fine. My squid generates the
following log entries:

|1999/02/26 19:56:10| httpAccept: FD 9: accepted
|1999/02/26 19:56:10| comm_accept: FD 33: (11) Resource temporarily unavailable

What is this?

|1999/02/26 19:56:10| clientReadRequest: FD 9: reading request...
|1999/02/26 19:56:10| parseHttpRequest: Method is 'GET'
|1999/02/26 19:56:10| parseHttpRequest: Request is 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| parseHttpRequest: req_hdr = {Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-excel,
|Referer: http://www.teltarif.de/
|Accept-Language: de
|Accept-Encoding: gzip, deflate
|User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: www.teltarif.de
|Proxy-Connection: Keep-Alive
|Cookie: tarif=arcor-auskunft%2cdebitel-auskunft%2cfred%2ctalkline-auskunft%2ctelegate%2ctelekom-auskunft%2carcor%2cmobilcom-call%2cmobilc
|
|}
|1999/02/26 19:56:10| parseHttpRequest: end = {}
|1999/02/26 19:56:10| parseHttpRequest: prefix_sz = 764, req_line_sz = 49
|1999/02/26 19:56:10| parseHttpRequest: Request Header is
|Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-excel, application/msword, application/vnd.ms-powerpoint,
|Referer: http://www.teltarif.de/
|Accept-Language: de
|Accept-Encoding: gzip, deflate
|User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: www.teltarif.de
|Proxy-Connection: Keep-Alive
|Cookie: tarif=arcor-auskunft%2cdebitel-auskunft%2cfred%2ctalkline-auskunft%2ctelegate%2ctelekom-auskunft%2carcor%2cmobilcom-call%2cmobilc
|
|
|1999/02/26 19:56:10| parseHttpRequest: Complete request received
|1999/02/26 19:56:10| conn->in.offset = 0
|1999/02/26 19:56:10| aclCheck: checking 'http_access deny CONNECT !SSL_ports'
|1999/02/26 19:56:10| aclMatchAclList: checking CONNECT
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl CONNECT method CONNECT'
|1999/02/26 19:56:10| aclMatchAclList: returning 0
|1999/02/26 19:56:10| aclCheck: checking 'http_access deny !Safe_ports'
|1999/02/26 19:56:10| aclMatchAclList: checking !Safe_ports
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl Safe_ports port 80 21 70 1025-65535'
|1999/02/26 19:56:10| aclMatchAclList: returning 0
|1999/02/26 19:56:10| aclCheck: checking 'http_access allow allowed_hosts'
|1999/02/26 19:56:10| aclMatchAclList: checking allowed_hosts
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl allowed_hosts src 192.168.130.0/255.255.255.0'
|1999/02/26 19:56:10| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:10| aclMatchAclList: returning 1
|1999/02/26 19:56:10| aclCheck: match found, returning 1
|1999/02/26 19:56:10| aclCheckCallback: answer=1

This appears to be squid checking whether the request is allowed. My
host 192.168.130.31 is in the allowed range 192.168.130.0/24, so the
request is granted.
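
(In netmask terms: 192.168.130.31 AND 255.255.255.0 = 192.168.130.0,
which is exactly the network given in 'acl allowed_hosts src
192.168.130.0/255.255.255.0', so the acl matches.)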

|1999/02/26 19:56:10| clientAccessCheckDone: 'http://www.teltarif.de/db/neu.html' answer=1
|1999/02/26 19:56:10| redirectStart: 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| clientRedirectDone: 'http://www.teltarif.de/db/neu.html' result=NULL
|1999/02/26 19:56:10| aclCheckFast: list: (nil)
|1999/02/26 19:56:10| aclCheckFast: no matches, returning: 1
|1999/02/26 19:56:10| clientInterpretRequestHeaders: REQ_NOCACHE = NOT SET
|1999/02/26 19:56:10| clientInterpretRequestHeaders: REQ_CACHABLE = SET
|1999/02/26 19:56:10| clientInterpretRequestHeaders: REQ_HIERARCHICAL = SET
|1999/02/26 19:56:10| clientProcessRequest: GET 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| clientProcessRequest2: default HIT
|1999/02/26 19:56:10| clientProcessRequest: TCP_HIT for 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| creating rep: 0x84d3130
|1999/02/26 19:56:10| clientCacheHit: http://www.teltarif.de/db/neu.html, 267 bytes
|1999/02/26 19:56:10| refreshCheck: 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| refreshCheck: Matched '. 600 20% 259200'
|1999/02/26 19:56:10| refreshCheck: age = 13088
|1999/02/26 19:56:10| check_time: Fri, 26 Feb 1999 18:56:10 GMT
|1999/02/26 19:56:10| entry->timestamp: Fri, 26 Feb 1999 15:18:02 GMT
|1999/02/26 19:56:10| refreshCheck: YES: default stale
|1999/02/26 19:56:10| clientProcessMiss: 'GET http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| ctx: enter level 0: 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| destroying rep: 0x84d3130
|1999/02/26 19:56:10| ctx: exit level 0
|1999/02/26 19:56:10| creating rep: 0x84d3130
|1999/02/26 19:56:10| aclCheckFast: list: (nil)
|1999/02/26 19:56:10| aclCheckFast: no matches, returning: 1
|1999/02/26 19:56:10| fwdStart: 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| peerSelect: http://www.teltarif.de/db/neu.html
|1999/02/26 19:56:10| peerSelectFoo: 'GET www.teltarif.de'
|1999/02/26 19:56:10| aclCheck: checking 'always_direct allow local-intranet'
|1999/02/26 19:56:10| aclMatchAclList: checking local-intranet
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl local-intranet dstdomain gf1.internal'
|1999/02/26 19:56:10| aclMatchDomainList: checking 'www.teltarif.de'
|1999/02/26 19:56:10| aclMatchDomainList: 'www.teltarif.de' NOT found
|1999/02/26 19:56:10| aclMatchAclList: returning 0
|1999/02/26 19:56:10| aclCheck: NO match found, returning 0
|1999/02/26 19:56:10| aclCheckCallback: answer=0
|1999/02/26 19:56:10| peerCheckAlwaysDirectDone: 0
|1999/02/26 19:56:10| peerSelectFoo: 'GET www.teltarif.de'
|1999/02/26 19:56:10| aclCheck: checking 'never_direct allow all'
|1999/02/26 19:56:10| aclMatchAclList: checking all
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
|1999/02/26 19:56:10| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:10| aclMatchAclList: returning 1
|1999/02/26 19:56:10| aclCheck: match found, returning 1
|1999/02/26 19:56:10| aclCheckCallback: answer=1
|1999/02/26 19:56:10| peerCheckNeverDirectDone: 1
|1999/02/26 19:56:10| peerSelectFoo: 'GET www.teltarif.de'
|1999/02/26 19:56:10| peerSelectFoo: direct = DIRECT_NO
|1999/02/26 19:56:10| aclCheckFast: list: 0x81f6e08
|1999/02/26 19:56:10| aclMatchAclList: checking http
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl http proto http'
|1999/02/26 19:56:10| aclMatchAclList: returning 1
|1999/02/26 19:56:10| aclCheckFast: list: 0x81f6f70
|1999/02/26 19:56:10| aclMatchAclList: checking ftp
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl ftp proto ftp'
|1999/02/26 19:56:10| aclMatchAclList: returning 0
|1999/02/26 19:56:10| aclMatchAclList: checking all
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
|1999/02/26 19:56:10| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:10| aclMatchAclList: returning 1
|1999/02/26 19:56:10| getSingleParent: returning junkbuster.rz.uni-karlsruhe.de

This looks fine. The request has been identified as HTTP, and the
junkbuster will be queried.

|1999/02/26 19:56:10| peerSelect: found single parent, skipping ICP query
|1999/02/26 19:56:10| peerSelectIcpPing: http://www.teltarif.de/db/neu.html
|1999/02/26 19:56:10| aclCheckFast: list: 0x81f6e08
|1999/02/26 19:56:10| aclMatchAclList: checking http
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl http proto http'
|1999/02/26 19:56:10| aclMatchAclList: returning 1
|1999/02/26 19:56:10| aclCheckFast: list: 0x81f6f70
|1999/02/26 19:56:10| aclMatchAclList: checking ftp
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl ftp proto ftp'
|1999/02/26 19:56:10| aclMatchAclList: returning 0
|1999/02/26 19:56:10| aclMatchAclList: checking all
|1999/02/26 19:56:10| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
|1999/02/26 19:56:10| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:10| aclMatchAclList: returning 1
|1999/02/26 19:56:10| neighborsCount: 0
|1999/02/26 19:56:10| peerSelectIcpPing: counted 0 neighbors
|1999/02/26 19:56:10| peerSelectFoo: After peerSelectIcpPing.
|1999/02/26 19:56:10| whichPeer: from 0.0.0.0 port 0
|1999/02/26 19:56:10| whichPeer: from 0.0.0.0 port 0
|1999/02/26 19:56:10| whichPeer: from 0.0.0.0 port 0
|1999/02/26 19:56:10| whichPeer: from 129.13.197.41 port 3130
|1999/02/26 19:56:10| peerSelect: SINGLE_PARENT/junkbuster.rz.uni-karlsruhe.de

This looks like a repeat. Why is squid going through the same motions
again?

|1999/02/26 19:56:10| peerSelectCallback: http://www.teltarif.de/db/neu.html
|1999/02/26 19:56:10| fwdConnectStart: http://www.teltarif.de/db/neu.html
|1999/02/26 19:56:10| fwdDispatch: FD 9: Fetching 'GET http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:10| httpStart: "GET http://www.teltarif.de/db/neu.html"
|1999/02/26 19:56:10| httpSendRequest: FD 12: httpState 0x84c6598.
|1999/02/26 19:56:10| httpBuildRequestHeader: range specs: (nil), cachable: 1; we_do_ranges: 0
|1999/02/26 19:56:10| httpBuildRequestHeader: Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-excel, appli
|1999/02/26 19:56:10| httpBuildRequestHeader: Referer: http://www.teltarif.de/
|1999/02/26 19:56:10| httpBuildRequestHeader: Accept-Language: de
|1999/02/26 19:56:10| httpBuildRequestHeader: Accept-Encoding: gzip, deflate
|1999/02/26 19:56:10| httpBuildRequestHeader: User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|1999/02/26 19:56:10| httpBuildRequestHeader: Host: www.teltarif.de
|1999/02/26 19:56:10| httpBuildRequestHeader: Proxy-Connection: Keep-Alive
|1999/02/26 19:56:10| httpBuildRequestHeader: Cookie: tarif=arcor-auskunft%2cdebitel-auskunft%2cfred%2ctalkline-auskunft%2ctelegate%2ctele
|1999/02/26 19:56:10| getMaxAge: 'http://www.teltarif.de/db/neu.html'

This is the HTTP request being built. How do I find out which host and
which port that request is being sent to?
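
(The only crude workaround I can think of is to look at the proxy's
outgoing connections while such a request is in flight, with something
like the line below, and check whether the foreign address column
shows junkbuster's port 8000 or www-cache's port 3128. There is
probably a proper way to see this in the logs:)

|mh@torres:~ > netstat -tn | grep ESTABLISHED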

|1999/02/26 19:56:10| httpSendRequest: FD 12:
|GET http://www.teltarif.de/db/neu.html HTTP/1.0
|Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-excel, application/msword, application/vnd.ms-powerpoint,
|Referer: http://www.teltarif.de/
|Accept-Language: de
|Accept-Encoding: gzip, deflate
|User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: www.teltarif.de
|Cookie: tarif=arcor-auskunft%2cdebitel-auskunft%2cfred%2ctalkline-auskunft%2ctelegate%2ctelekom-auskunft%2carcor%2cmobilcom-call%2cmobilc
|Via: 1.0 www-cache.gf1.internal:3128 (Squid/2.0.PATCH2)
|X-Forwarded-For: 192.168.130.31
|Cache-Control: max-age=259200
|Proxy-Connection: keep-alive
|
|
|1999/02/26 19:56:10| httpSendComplete: FD 12: size 885: errflag 0.

This request seems to be fine.

|1999/02/26 19:56:11| httpReadReply: FD 12: len 194.
|1999/02/26 19:56:11| httpProcessReplyHeader: key '25D4D7CDF9736DCC01B498992DB9DF7E'
|1999/02/26 19:56:11| ctx: enter level 0: 'http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:11| GOT HTTP REPLY HDR:
|---------
|HTTP/1.0 200 OK
|Date: Fri, 26 Feb 1999 18:57:05 GMT
|Server: Apache/1.3.4 (Unix)
|Content-Type: text/plain
|Age: 0
|X-Cache: MISS from www-cache.rz.uni-karlsruhe.de
|Proxy-Connection: close
|
|
|----------

"MISS" means that this page is not in the parent cache?

|1999/02/26 19:56:11| httpProcessReplyHeader: HTTP CODE: 200
|1999/02/26 19:56:11| ctx: exit level 0
|1999/02/26 19:56:11| clientSendMoreData: http://www.teltarif.de/db/neu.html, 194 bytes
|1999/02/26 19:56:11| clientSendMoreData: FD 9 'http://www.teltarif.de/db/neu.html', out.offset=0
|1999/02/26 19:56:11| creating rep: 0x84d1930
|1999/02/26 19:56:11| clientSendMoreData: Appending 0 bytes after 194 bytes of headers
|1999/02/26 19:56:11| packing sline 0x84d195c using 0xbffff3d4:
|1999/02/26 19:56:11| HTTP/1.0 200 OK
|1999/02/26 19:56:11| destroying rep: 0x84d1930
|1999/02/26 19:56:11| httpPconnTransferDone: FD 12
|1999/02/26 19:56:11| clientWriteComplete: FD 9, sz 237, err 0, off 194, len -1
|1999/02/26 19:56:11| httpReadReply: FD 12: len 73.
|1999/02/26 19:56:11| clientSendMoreData: http://www.teltarif.de/db/neu.html, 73 bytes
|1999/02/26 19:56:11| clientSendMoreData: FD 9 'http://www.teltarif.de/db/neu.html', out.offset=194
|1999/02/26 19:56:11| httpPconnTransferDone: FD 12
|1999/02/26 19:56:11| clientWriteComplete: FD 9, sz 73, err 0, off 267, len -1
|1999/02/26 19:56:11| httpReadReply: FD 12: len 0.
|1999/02/26 19:56:11| clientSendMoreData: http://www.teltarif.de/db/neu.html, 0 bytes
|1999/02/26 19:56:11| clientSendMoreData: FD 9 'http://www.teltarif.de/db/neu.html', out.offset=267
|1999/02/26 19:56:11| clientWriteComplete: FD 9, sz 0, err 0, off 267, len 267
|1999/02/26 19:56:11| clientWriteComplete: FD 9 transfer is DONE
|1999/02/26 19:56:11| clientWriteComplete: closing, content_length < 0
|1999/02/26 19:56:11| connStateFree: FD 9
|1999/02/26 19:56:11| httpRequestFree: http://www.teltarif.de/db/neu.html
|1999/02/26 19:56:11| httpRequestFree: al.url='http://www.teltarif.de/db/neu.html'
|1999/02/26 19:56:11| fwdServerClosed: FD 12 http://www.teltarif.de/db/neu.html

This is where I am losing it. I believe that squid should now ask its
parent to retrieve the page for it. However, I don't even see this
strange error message being built anywhere. What am I missing here?

(2) The next try is to retrieve an FTP URL
(ftp://ftp.rz.uni-karlsruhe.de/, a local FTP server). Doing so makes
my browser display a squid error message: "zero sized reply, squid did
not receive any data for this request". Here are the log entries:
|1999/02/26 19:56:32| httpAccept: FD 9: accepted
|1999/02/26 19:56:32| comm_accept: FD 33: (11) Resource temporarily unavailable

Again. What is this?

|1999/02/26 19:56:32| clientReadRequest: FD 9: reading request...
|1999/02/26 19:56:32| parseHttpRequest: Method is 'GET'
|1999/02/26 19:56:32| parseHttpRequest: Request is 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| parseHttpRequest: req_hdr = {User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: ftp.rz.uni-karlsruhe.de:21
|Proxy-Connection: Keep-Alive
|
|}
|1999/02/26 19:56:32| parseHttpRequest: end = {}
|1999/02/26 19:56:32| parseHttpRequest: prefix_sz = 172, req_line_sz = 45
|1999/02/26 19:56:32| parseHttpRequest: Request Header is
|User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: ftp.rz.uni-karlsruhe.de:21
|Proxy-Connection: Keep-Alive
|
|
|1999/02/26 19:56:32| parseHttpRequest: Complete request received

Looks fine.

|1999/02/26 19:56:32| conn->in.offset = 0
|1999/02/26 19:56:32| aclCheck: checking 'http_access deny CONNECT !SSL_ports'
|1999/02/26 19:56:32| aclMatchAclList: checking CONNECT
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl CONNECT method CONNECT'
|1999/02/26 19:56:32| aclMatchAclList: returning 0
|1999/02/26 19:56:32| aclCheck: checking 'http_access deny !Safe_ports'
|1999/02/26 19:56:32| aclMatchAclList: checking !Safe_ports
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl Safe_ports port 80 21 70 1025-65535'
|1999/02/26 19:56:32| aclMatchAclList: returning 0
|1999/02/26 19:56:32| aclCheck: checking 'http_access allow allowed_hosts'
|1999/02/26 19:56:32| aclMatchAclList: checking allowed_hosts
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl allowed_hosts src 192.168.130.0/255.255.255.0'
|1999/02/26 19:56:32| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:32| aclMatchAclList: returning 1
|1999/02/26 19:56:32| aclCheck: match found, returning 1
|1999/02/26 19:56:32| aclCheckCallback: answer=1

Again, my host is allowed to access my squid. Fine.

|1999/02/26 19:56:32| clientAccessCheckDone: 'ftp://ftp.rz.uni-karlsruhe.de/' answer=1
|1999/02/26 19:56:32| redirectStart: 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| clientRedirectDone: 'ftp://ftp.rz.uni-karlsruhe.de/' result=NULL
|1999/02/26 19:56:32| aclCheckFast: list: (nil)
|1999/02/26 19:56:32| aclCheckFast: no matches, returning: 1
|1999/02/26 19:56:32| clientInterpretRequestHeaders: REQ_NOCACHE = NOT SET
|1999/02/26 19:56:32| clientInterpretRequestHeaders: REQ_CACHABLE = SET
|1999/02/26 19:56:32| clientInterpretRequestHeaders: REQ_HIERARCHICAL = SET
|1999/02/26 19:56:32| clientProcessRequest: GET 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| clientProcessRequest2: storeGet() MISS
|1999/02/26 19:56:32| clientProcessRequest: TCP_MISS for 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| clientProcessMiss: 'GET ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| creating rep: 0x84d1930
|1999/02/26 19:56:32| aclCheckFast: list: (nil)
|1999/02/26 19:56:32| aclCheckFast: no matches, returning: 1
|1999/02/26 19:56:32| fwdStart: 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| peerSelect: ftp://ftp.rz.uni-karlsruhe.de/

This is where squid starts to select the peer.

|1999/02/26 19:56:32| peerSelectFoo: 'GET ftp.rz.uni-karlsruhe.de'
|1999/02/26 19:56:32| aclCheck: checking 'always_direct allow local-intranet'
|1999/02/26 19:56:32| aclMatchAclList: checking local-intranet
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl local-intranet dstdomain gf1.internal'
|1999/02/26 19:56:32| aclMatchDomainList: checking 'ftp.rz.uni-karlsruhe.de'
|1999/02/26 19:56:32| aclMatchDomainList: 'ftp.rz.uni-karlsruhe.de' NOT found
|1999/02/26 19:56:32| aclMatchAclList: returning 0
|1999/02/26 19:56:32| aclCheck: NO match found, returning 0
|1999/02/26 19:56:32| aclCheckCallback: answer=0
|1999/02/26 19:56:32| peerCheckAlwaysDirectDone: 0
|1999/02/26 19:56:32| peerSelectFoo: 'GET ftp.rz.uni-karlsruhe.de'
|1999/02/26 19:56:32| aclCheck: checking 'never_direct allow all'
|1999/02/26 19:56:32| aclMatchAclList: checking all
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
|1999/02/26 19:56:32| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:32| aclMatchAclList: returning 1
|1999/02/26 19:56:32| aclCheck: match found, returning 1
|1999/02/26 19:56:32| aclCheckCallback: answer=1
|1999/02/26 19:56:32| peerCheckNeverDirectDone: 1
|1999/02/26 19:56:32| peerSelectFoo: 'GET ftp.rz.uni-karlsruhe.de'
|1999/02/26 19:56:32| peerSelectFoo: direct = DIRECT_NO

The URL is not to be fetched directly.

|1999/02/26 19:56:32| aclCheckFast: list: 0x81f6e08
|1999/02/26 19:56:32| aclMatchAclList: checking http
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl http proto http'
|1999/02/26 19:56:32| aclMatchAclList: returning 0
|1999/02/26 19:56:32| aclMatchAclList: checking all
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
|1999/02/26 19:56:32| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:32| aclMatchAclList: returning 1
|1999/02/26 19:56:32| aclCheckFast: list: 0x81f6f70
|1999/02/26 19:56:32| aclMatchAclList: checking ftp
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl ftp proto ftp'
|1999/02/26 19:56:32| aclMatchAclList: returning 1

This request matches two ACLs. Shouldn't processing stop when the
first ACL matches? However, this seems to be fine, since the request
needs to be identified as FTP, which it actually is.

|1999/02/26 19:56:32| getSingleParent: returning www-cache.rz.uni-karlsruhe.de

Looks good; www-cache is the parent squid that can handle FTP
transfers.

|1999/02/26 19:56:32| peerSelect: found single parent, skipping ICP query
|1999/02/26 19:56:32| peerSelectIcpPing: ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:32| aclCheckFast: list: 0x81f6e08
|1999/02/26 19:56:32| aclMatchAclList: checking http
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl http proto http'
|1999/02/26 19:56:32| aclMatchAclList: returning 0
|1999/02/26 19:56:32| aclMatchAclList: checking all
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
|1999/02/26 19:56:32| aclMatchIp: '192.168.130.31' found
|1999/02/26 19:56:32| aclMatchAclList: returning 1
|1999/02/26 19:56:32| aclCheckFast: list: 0x81f6f70
|1999/02/26 19:56:32| aclMatchAclList: checking ftp
|1999/02/26 19:56:32| aclMatchAcl: checking 'acl ftp proto ftp'
|1999/02/26 19:56:32| aclMatchAclList: returning 1
|1999/02/26 19:56:32| neighborsCount: 0
|1999/02/26 19:56:32| peerSelectIcpPing: counted 0 neighbors
|1999/02/26 19:56:32| peerSelectFoo: After peerSelectIcpPing.
|1999/02/26 19:56:32| whichPeer: from 0.0.0.0 port 0
|1999/02/26 19:56:32| whichPeer: from 0.0.0.0 port 0
|1999/02/26 19:56:32| whichPeer: from 0.0.0.0 port 0
|1999/02/26 19:56:32| whichPeer: from 129.13.197.41 port 3130
|1999/02/26 19:56:32| peerSelect: SINGLE_PARENT/junkbuster.rz.uni-karlsruhe.de

Again, it looks like squid is going through the peer selection motions
a second time. This time, junkbuster is returned as the peer, which is
wrong, since junkbuster can't do FTP.

|1999/02/26 19:56:32| peerSelectCallback: ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:32| fwdConnectStart: ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:32| fwdDispatch: FD 9: Fetching 'GET ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| httpStart: "GET ftp://ftp.rz.uni-karlsruhe.de/"
|1999/02/26 19:56:32| httpSendRequest: FD 12: httpState 0x84c6598.
|1999/02/26 19:56:32| httpBuildRequestHeader: range specs: (nil), cachable: 1; we_do_ranges: 0
|1999/02/26 19:56:32| httpBuildRequestHeader: User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|1999/02/26 19:56:32| httpBuildRequestHeader: Host: ftp.rz.uni-karlsruhe.de:21
|1999/02/26 19:56:32| httpBuildRequestHeader: Proxy-Connection: Keep-Alive
|1999/02/26 19:56:32| getMaxAge: 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:32| httpSendRequest: FD 12:
|GET ftp://ftp.rz.uni-karlsruhe.de/ HTTP/1.0
|User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: ftp.rz.uni-karlsruhe.de:21
|Via: 1.0 www-cache.gf1.internal:3128 (Squid/2.0.PATCH2)
|X-Forwarded-For: 192.168.130.31
|Cache-Control: max-age=259200
|Proxy-Connection: keep-alive
|
|
|1999/02/26 19:56:32| httpSendComplete: FD 12: size 293: errflag 0.

I believe this request is being sent to junkbuster.

|1999/02/26 19:56:32| httpReadReply: FD 12: len 0.
|1999/02/26 19:56:32| fwdFail: ERR_ZERO_SIZE_OBJECT "Service Unavailable"
| ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:32| fwdServerClosed: FD 12 ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:32| fwdServerClosed: re-forwarding (1 tries, 0 secs)

Junkbuster seems to hang up on that request immediately, without any
error message. I have verified this by sending the same request to
junkbuster via telnet.
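
(The telnet test was roughly the following; junkbuster drops the
connection as soon as the request is complete, without sending a
single byte back:)

|mh@torres:~ > telnet junkbuster.rz.uni-karlsruhe.de 8000
|GET ftp://ftp.rz.uni-karlsruhe.de/ HTTP/1.0
|Host: ftp.rz.uni-karlsruhe.de:21
|
|Connection closed by foreign host.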

|1999/02/26 19:56:32| fwdConnectStart: ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:32| fwdDispatch: FD 9: Fetching 'GET ftp://ftp.rz.uni-karlsruhe.de/'

This process seems to repeat ten times. I have deleted the subsequent
tries because they look the same, though I have not verified that they
are exactly identical.

|1999/02/26 19:56:33| httpStart: "GET ftp://ftp.rz.uni-karlsruhe.de/"
|1999/02/26 19:56:33| httpSendRequest: FD 13: httpState 0x84c6598.
|1999/02/26 19:56:33| httpBuildRequestHeader: range specs: (nil), cachable: 1; we_do_ranges: 0
|1999/02/26 19:56:33| httpBuildRequestHeader: User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|1999/02/26 19:56:33| httpBuildRequestHeader: Host: ftp.rz.uni-karlsruhe.de:21
|1999/02/26 19:56:33| httpBuildRequestHeader: Proxy-Connection: Keep-Alive
|1999/02/26 19:56:33| getMaxAge: 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:33| httpSendRequest: FD 13:
|GET ftp://ftp.rz.uni-karlsruhe.de/ HTTP/1.0
|User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: ftp.rz.uni-karlsruhe.de:21
|Via: 1.0 www-cache.gf1.internal:3128 (Squid/2.0.PATCH2)
|X-Forwarded-For: 192.168.130.31
|Cache-Control: max-age=259200
|Proxy-Connection: keep-alive
|
|
|1999/02/26 19:56:34| httpSendComplete: FD 13: size 293: errflag 0.
|1999/02/26 19:56:34| httpReadReply: FD 13: len 0.
|1999/02/26 19:56:34| fwdFail: ERR_ZERO_SIZE_OBJECT "Service Unavailable"
| ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:34| fwdServerClosed: FD 13 ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:34| fwdServerClosed: re-forwarding (10 tries, 2 secs)
|1999/02/26 19:56:34| fwdConnectStart: ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:34| fwdDispatch: FD 9: Fetching 'GET ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:34| httpStart: "GET ftp://ftp.rz.uni-karlsruhe.de/"
|1999/02/26 19:56:34| httpSendRequest: FD 12: httpState 0x84c6598.
|1999/02/26 19:56:34| httpBuildRequestHeader: range specs: (nil), cachable: 1; we_do_ranges: 0
|1999/02/26 19:56:34| httpBuildRequestHeader: User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|1999/02/26 19:56:34| httpBuildRequestHeader: Host: ftp.rz.uni-karlsruhe.de:21
|1999/02/26 19:56:34| httpBuildRequestHeader: Proxy-Connection: Keep-Alive
|1999/02/26 19:56:34| getMaxAge: 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:34| httpSendRequest: FD 12:
|GET ftp://ftp.rz.uni-karlsruhe.de/ HTTP/1.0
|User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
|Host: ftp.rz.uni-karlsruhe.de:21
|Via: 1.0 www-cache.gf1.internal:3128 (Squid/2.0.PATCH2)
|X-Forwarded-For: 192.168.130.31
|Cache-Control: max-age=259200
|Proxy-Connection: keep-alive
|
|
|1999/02/26 19:56:34| httpSendComplete: FD 12: size 293: errflag 0.
|1999/02/26 19:56:34| httpReadReply: FD 12: len 0.
|1999/02/26 19:56:34| fwdFail: ERR_ZERO_SIZE_OBJECT "Service Unavailable"
| ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:34| fwdServerClosed: FD 12 ftp://ftp.rz.uni-karlsruhe.de/

This was the last try; squid seems to give up.

|1999/02/26 19:56:34| creating rep: 0x836bfe0
|1999/02/26 19:56:34| errorConvert: %U --> 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:34| errorConvert: %U --> 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:34| errorConvert: %T --> 'Fri, 26 Feb 1999 18:56:34 GMT'
|1999/02/26 19:56:34| errorConvert: %h --> 'www-cache.gf1.internal'
|1999/02/26 19:56:34| errorConvert: %s --> 'Squid/2.0.PATCH2'
|1999/02/26 19:56:34| errorConvert: %S --> '
|<br clear="all">
|<hr noshade size=1>
|Generated Fri, 26 Feb 1999 18:56:34 GMT by www-cache.gf1.internal (Squid/2.0.PATCH2)
|</BODY></HTML>
|'
|1999/02/26 19:56:34| packing sline 0x836c00c using 0xbffff3e8:
|1999/02/26 19:56:34| HTTP/1.0 503 Service Unavailable
|1999/02/26 19:56:34| destroying rep: 0x836bfe0
|1999/02/26 19:56:34| clientSendMoreData: ftp://ftp.rz.uni-karlsruhe.de/, 851 bytes
|1999/02/26 19:56:34| clientSendMoreData: FD 9 'ftp://ftp.rz.uni-karlsruhe.de/', out.offset=0
|1999/02/26 19:56:34| creating rep: 0x836bfe0
|1999/02/26 19:56:34| clientSendMoreData: Appending 608 bytes after 243 bytes of headers
|1999/02/26 19:56:34| packing sline 0x836c00c using 0xbffff32c:
|1999/02/26 19:56:34| HTTP/1.0 503 Service Unavailable
|1999/02/26 19:56:34| destroying rep: 0x836bfe0
|1999/02/26 19:56:34| clientWriteComplete: FD 9, sz 919, err 0, off 851, len 851
|1999/02/26 19:56:34| clientWriteComplete: FD 9 transfer is DONE
|1999/02/26 19:56:34| clientWriteComplete: closing, content_length < 0
|1999/02/26 19:56:34| connStateFree: FD 9
|1999/02/26 19:56:34| httpRequestFree: ftp://ftp.rz.uni-karlsruhe.de/
|1999/02/26 19:56:34| httpRequestFree: al.url='ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:34| ctx: enter level 0: 'ftp://ftp.rz.uni-karlsruhe.de/'
|1999/02/26 19:56:34| destroying rep: 0x84d1930
|1999/02/26 19:56:39| ctx: exit level 0

This seems to be the generation of the error message and its
transmission to the client.

Can anybody please enlighten me?

Any hints will be appreciated.

Greetings
Marc

-- 
-------------------------------------- !! No courtesy copies, please !! -----
Marc Haber          |   " Questions are the         | Mailadresse im Header
Karlsruhe, Germany  |     Beginning of Wisdom "     | Fon: *49 721 966 32 15
Nordisch by Nature  | Lt. Worf, TNG "Rightful Heir" | Fax: *49 721 966 31 29