[squid-users] Re: Corrections (was TCP_SWAPFAIL/200)

From: Linda Walsh <squid-user_at_tlinx.org>
Date: Thu, 19 Apr 2012 13:30:19 -0700

Amos Jeffries wrote:

> On 18.04.2012 12:46, Linda Walsh wrote:

>>
>> It appears the local disk-store isn't growing over time -- so I'm
>> assuming it is telling me the on-disk store isn't working right?
>
> Yes.
>
>>
>
> Please prioritise the core dump investigation.
> Please use gdb and find out what the crash is coming from. The crash and
> core-dump could be what is behind those incomplete or truncated responses.
>
>
> At this point I suggest updating to 3.2.0.17. There are a bunch of cache
> related fixes in that release. The new cache swap.state format will
> rebuild your cache_dir meta data from scratch and discard anything which
> has problems visible.

---
	I prioritized upgrading to the latest release and will go from there
(no point wasting time on things that may already have been fixed).
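	When a core does show up, I'll start with a minimal gdb pass --
paths are my assumption; I'll adjust to wherever the binary and the core
actually land:

   gdb /usr/sbin/squid /var/cache/squid/core
   (gdb) bt                    # backtrace at the crash point
   (gdb) thread apply all bt   # in case another thread is involved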
> 
> If the core dumps continue with the new release, please prioritise 
> those. Most of the rest of what you describe may be side effects of the 
> crashing.
---
Will do...
>> http_access allow CONNECT Safe_Ports
> 
> NOTE: Dangerous. Safe_Ports includes ports 1024-65535 and other ports 
> unsafe to permit CONNECT to. This could trivially be used as a 
> multi-stage spam proxy or worse.
>   i.e. a trivial DoS of "CONNECT localhost:8080 HTTP/1.1\n\n" results 
> in a CONNECT loop until your machine's ports are all used up.
----
Good point. I just wanted to allow the general case of SSL/non-SSL over any of
the ports, and I'm just trying to get things working at this point... though
I've had this config for some time with no problems -- the only connector is
on my side, and it's 'me', so I shouldn't deny myself my own service unless I
try!  ;-)
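Once things are stable I'll likely fall back to the stock guard -- something
like this, using the ACL names from the shipped squid.conf:

   acl SSL_ports port 443
   http_access deny CONNECT !SSL_ports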
>> hierarchy_stoplist cgi-bin ?
> 
> You can drop hierarchy_stoplist from your config for simplicity.
---
Check. (Some of these are carry-overs from previous configs, or were meant
for the day I use this for a different setup.)
 
>> cache_mem       8 GB
>> memory_replacement_policy heap GDSF
>> cache_replacement_policy heap LFUDA
>> cache_dir aufs /var/cache/squid 65535 64 64
> 
> You have multiple workers configured. AUFS does not support SMP at this 
> time. That could be the problem you have with SWAPFAIL, as the workers 
> collide altering the cache contents.
---
	Wah?  ...but, but... how do I make use of SMP with AUFS?
If I go with unique cache dirs, that seems very sub-optimal -- I end up
with 12 separate cache areas, no?  When I want to fetch something from
the cache, is there coordination about which content is in which worker's
cache that will automatically route the request to the correct worker?
If so, that's cool, but if not, then I've cut my hit rate to roughly
1/N-cpus.
> 
> To use this cache either wrap it in "if ${process_number} = N" tests for 
> the workers you want to do caching. Or add ${process_number} to the path 
> for each worker to get its own unique directory area.
> 
> eg:
>  cache_dir aufs /var/cache/squid_${process_number} 65535 64 64
> 
> or
> if ${process_number} = 1
>  cache_dir aufs /var/cache/squid 65535 64 64
> endif
> 
--- As asked above, how do I get the benefit of both asynchronous writes
and multiple cores?
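If per-worker dirs turn out to be the answer, I assume I'd also have to
split the space -- something like this, sizes being my guess (65535 MB
divided 8 ways):

   cache_dir aufs /var/cache/squid_${process_number} 8192 64 64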
>> url_rewrite_host_header off
>> url_rewrite_access deny all
>> url_rewrite_bypass on
> 
> You do not have any re-writer or redirector configured. These 
> url_rewrite_* can all go.
-----
	Is it harmful?  (It was there for future 'expansion plans' -- no
rewriters yet, but I was planning...)
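I take it they're inert anyway until a helper is actually wired up, e.g.
(path purely hypothetical):

   url_rewrite_program /usr/local/bin/rewriter.pl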
> 
>> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
> 
> This above pattern ...
====
???? above what pattern?
>> refresh_pattern -i \.(ico|gif|jpg|png)   0 20%   4320
>> ignore-no-cache ignore-private override-expire
>> refresh_pattern -i ^http:   0 20%   4320    ignore-no-cache 
>> ignore-private
> 
> "private" means the contents MUST NOT be served to multiple clients. 
> Since you say this is a personal proxy just for you, thats okay but be 
> carefulif you ever open it for use by other people. Things like your 
> personal details embeded in same pages are cached by this.
----
	Got it... I should add a comment in that area to that effect.
	That might be an enhancement -- something like:
	ignore-private-same-client
> 
> "no-cache" *actually* just means check for updates before using the 
> cached version. This is usually not as useful as many tutorials make it 
> out to be.
---
	Well, dang tutorials -- I'm screwed if I follow, and if I don't! ;-)
> 
> 
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
> 
>  ... is meant to be here (second to last).
> 
>> refresh_pattern .               0       20%     4320
>> read_ahead_gap 256 MB
> 
> Uhm... 256 MB buffering per request.... sure you want to do that?
----
	I **think** so... doesn't that mean it will buffer up to 256MB
of a response before my client is ready for it?
	I think of the common case where I'm saving a file and it takes me
a while to find the dir to save to.  I tweaked a few params in this area,
and it went from having to wait after I decided to having the download
already finished by the time I decided.
	Would this be responsible for that?
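If 256 MB is overkill, I gather the same dial turned back down would look
like this (16 KB being, as I understand it, the shipped default):

   read_ahead_gap 16 KB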
>> workers 8
> 
> 
> Please run "squid -k parse" and fix the messages about obsolete or 
> changed config options.
----
	*sniff* -- I thought I'd caught most of those...
oh well...
Thanks!
	Will report back when I have more info...
	(Just started with 3.2.0.17, but without all of the above config
changes yet...)
Received on Thu Apr 19 2012 - 20:30:40 MDT
