Re: [squid-users] how about releasing the major "supported" linux distros results? and what about dynamic content sites?

From: Eliezer Croitoru <eliezer_at_ec.hadorhabaac.com>
Date: Wed, 04 Jan 2012 12:48:56 +0200

On 04/01/2012 11:15, Amos Jeffries wrote:
> On 4/01/2012 5:32 p.m., Eliezer Croitoru wrote:
>> I have a couple of things:
>> I have spent a couple of days testing Squid on various Linux distros:
>> CentOS 5.7/6.0/6.2, Fedora 15/16, Ubuntu 10.04.3/11.10 and Gentoo (on
>> the latest portage), using both tproxy and forward proxy (all i686
>> except Ubuntu x64).
>> I couldn't find any solid information on running Squid on these
>> systems, so I researched it myself.
>> I used Squid 3.1.18, 3.2.0.8, 3.2.0.13 (latest daily source) and
>> 3.2.0.14.
>> On CentOS and Ubuntu, Squid 3.2.0.14 was unable to work smoothly in
>> interception mode, but in regular forward mode it was fine.
>>
>> On the CentOS 5 branch there is no tproxy support built into the
>> stock kernel, so you must recompile the kernel to get tproxy support.
>> On the CentOS 6 branch tproxy support is built into the stock kernel,
>> but nothing I did (disabling SELinux, loading modules and some other
>> things) made tproxy work.
>> Because I started with CentOS I thought I was doing something wrong,
>> but after checking Ubuntu, Fedora and Gentoo I understood that the
>> problem lies with CentOS 6's tproxy or something else on that system,
>> not with Squid.
>
>> Also, I didn't find any README or other documentation that explains
>> the logic of tproxy, so that in case of a problem it can be debugged.
>
> http://wiki.squid-cache.org/Feature/Tproxy4 has everything there is.
> The "More Info" link to Balabit is a README that covers what the
> kernel internals do. The Squid internals amount to only two trivial
> bits: inverting the IPs on arrival and binding the spoofed one on
> exit. The rest is generic intercepted-traffic handling (parsing the
> URL in origin-server format and doing IP security checks on the Host
> header). These are well tested now and work in 3.2.0.14.
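
(For reference, the kernel-side plumbing that wiki page describes boils
down to roughly the following; this is a sketch only, the mark value
and port 3129 are placeholders that have to match the
"http_port ... tproxy" line in squid.conf, and rp_filter may also need
to be relaxed on the interfaces involved:)

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# and the matching receiving port in squid.conf:
http_port 3129 tproxy
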
>
> I'd like to know what Ubuntu and Gentoo versions you tested with and
> what you conclude the problems are there. Both to push for fixes and
> update that feature page.

Ubuntu 11.10 (i386) and 10.04.3 (i386 + x64) with the latest updates.
The list of development and library packages I used:

sudo apt-get install build-essential libldap2-dev libpam0g-dev \
    libdb-dev dpatch cdbs libsasl2-dev debhelper libcppunit-dev \
    libkrb5-dev comerr-dev libcap2-dev libexpat1-dev libxml2-dev \
    dpkg-dev curl libssl-dev libssl0.9.8 libssl0.9.8-dbg \
    libcurl4-openssl-dev

The most stable version was 3.2.0.8 (there was a problem with the SSL
dependencies that was fixed later).
Since version 3.2.0.12 I have had speed problems.
Since version 3.2.0.13 some pages that are not supposed to be cached
are being cached, and with version 3.2.0.14 in interception mode I am
getting a "request is too long" error, or something like that (there is
a thread on the mailing list).

The Gentoo I was using has a month-old portage with Linux kernel
2.6.36-rXXX (I don't remember the exact revision now) (i386).

On Gentoo the distro already provides everything you need to build
Squid: just configure and make. (The init.d scripts were taken from the
Gentoo portage and modified.)

I am building my Squid with:

./configure --prefix=/opt/squid32013 --includedir=/include \
    --mandir=/share/man --infodir=/share/info \
    --localstatedir=/opt/squid32013/var --disable-maintainer-mode \
    --disable-dependency-tracking --disable-silent-rules \
    --enable-inline --enable-async-io=8 --enable-storeio=ufs,aufs \
    --enable-removal-policies=lru,heap --enable-delay-pools \
    --enable-cache-digests --enable-underscores --enable-icap-client \
    --enable-follow-x-forwarded-for \
    --enable-digest-auth-helpers=ldap,password \
    --enable-negotiate-auth-helpers=squid_kerb_auth \
    --enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group \
    --enable-arp-acl --enable-esi --disable-translation \
    --with-logdir=/opt/squid32013/var/log \
    --with-pidfile=/var/run/squid32013.pid --with-filedescriptors=65536 \
    --with-large-files --with-default-user=proxy \
    --enable-linux-netfilter --enable-ltdl-convenience --enable-snmp

I change the installation directory for Squid with each version release.

The funny thing is that Fedora 16 with kernel 3.1.6 and Squid 3.2.0.13
from the repo just works fine.

>
>>
>> After all this, what do you think about releasing a list of
>> "supported" Linux distros that seem to work fine with every Squid
>> release? I'm talking about the major distros, not about "Puppy Linux"
>> or "DSL".
>
> You mean as part of the release? That is kind of tricky because none
> of the distros do run-testing until after the release package is
> available. Sometimes months or even years after, as in the case of
> certain RPM-based distros having a 12-18 month round-trip between our
> release and bug feedback. Naturally, it's way too late to bother
> announcing those problems, and the faster-moving distros appear to
> have numerous unfixed bugs in a constantly changing set, a very fuzzy
> situation in overview. If I'm aware of anything problematic to a
> specific distro in advance I try to mention it in the release
> announcement. http://wiki.squid-cache.org/BestOsForSquid has a list of
> the major distros Squid works on, but not correlated to particular
> releases or features. That could be updated to correlate with Squid
> series for better documentation of what to expect.
>
> I also get the impression that you want a feature-by-feature support
> rundown for each distro. With an uncounted (literally) number of
> features in Squid to be tested and very little automation coverage,
> this is a lot of work just to get a reasonably accurate idea. We try,
> though, as part of the bug detection and removal work. Assistance is
> very welcome; our TODO list has a few items anybody can help with:

Well, not exactly; just the main usage scenarios.
Most CentOS or RedHat installs will be for an office-type environment,
so on them you will need the auth helpers, while on an ISP system you
won't need auth helpers but you will need WCCP, SNMP and tproxy. That
way we can minimize the checks to only 1-3 sets of configuration per
distro.
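
For example (a rough sketch only, reusing flags from my build above;
not an exact test matrix):

# "office" profile: forward proxy with authentication
./configure --enable-digest-auth-helpers=ldap,password \
    --enable-negotiate-auth-helpers=squid_kerb_auth \
    --enable-external-acl-helpers=ldap_group,unix_group,wbinfo_group \
    --enable-icap-client

# "ISP" profile: interception, no auth helpers
./configure --enable-linux-netfilter --enable-wccp --enable-wccpv2 \
    --enable-snmp --enable-delay-pools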

>
> * some help wanted documenting (even just a catalog list) all the
> features in Squid. http://wiki.squid-cache.org/FeatureComparison and
> http://wiki.squid-cache.org/Features need extending and correlating.
>
> * resource donations wanted for automated tests. We run build tests
> on major distros on multiple architectures (see
> http://wiki.squid-cache.org/BuildFarm), but we are limited by lack of
> some hardware architectures, the CPU time available on the hardware we
> have, and access to the distro itself in some cases (MacOS, Solaris,
> Windows, AIX, ...spot the trend). Donation details on how to help
> extend that are outlined on the wiki page.
> + Given more CPU time we could start to look at run-time testing
> features from the list above, but that is a bit problematic with the
> present resources. Help would be very welcome.
>
> * help wanted adding automated test coverage. The tests we have so
> far are a bit sparse; many of the features are not distro-specific and
> could be tested as units during the existing build scans, but are not
> yet. Interested persons contributing patches are very welcome. We use
> cppunit and STUB frameworks which make test writing relatively easy,
> but it can be time consuming.
> + even just a coverage list of classes versus what is/is not tested
> so far would be helpful to target future work.
>
> * help wanted adding/updating Feature/* pages in the wiki as bugs are
> discovered and analysed. Likewise KnowledgeBase/* pages for all the
> major distros with distro-specific details as and when behaviour quirks
> are found.
Sorry, but I'm not that good at writing.
In my language (Hebrew) I can write things up and maybe translate them
to English later.

Another thing: programming and coding are my weak side.

>
> (Sorry, it's a bit of a long plea, but this is one area I'm keen to
> see progress on, and all the devs have spent many hours of unsponsored
> time slaving away to get this far.)
>
>>
>> This is the place to rate the Linux distros you would like Squid to
>> be tested on.
>>
>>
>> Another subject:
>> Which dynamic-content sites, or sites that are currently uncacheable
>> by Squid, would you want to be able to cache?
>>
>> Let's say YouTube, MS updates and things like that.
>> I know that cachevideo is available, but I think that with some
>> effort we can build a basic concept that will benefit all of us.
>>
>> Votes for sites will be gladly accepted.
>>
>> (I will be glad to explain to anyone who wants to understand it the
>> reasons that make these sites' objects uncacheable in many cases, and
>> also how and why Squid is doing such a great job.)
>>
>
> Facebook traffic is another FAQ (or at least was last year). Explaining
> why so much of their traffic does not cache, and why we must suffer
> instead of forcing it to, is in need of some good documentation.

I was working on basic caching for dynamic-content video providers,
using nginx and a Ruby URL rewriter.
I took the idea from http://code.google.com/p/youtube-cache/ which uses
nginx's strong points to cache YouTube.
I also made it work for vimeo.com and Facebook videos.
I have a list of sites, and of the URL complexities that keep some of
them from being cached by Squid.
My next target is Windows updates.
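
(To give an idea of how such a rewriter hooks into Squid, a minimal
squid.conf sketch; the helper path and domain list are placeholders,
not my exact setup:)

# sketch only: hand matching requests to the rewriter helper
url_rewrite_program /opt/video-cache/rewriter.rb
url_rewrite_children 10
acl video_sites dstdomain .youtube.com .vimeo.com .fbcdn.net
url_rewrite_access allow video_sites
url_rewrite_access deny all

The idea is that the rewriter reads request URLs on stdin and answers
with a rewritten URL pointing at the local nginx instance that stores
the file.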

I saw an interesting cache method at
http://www.glob.com.au/windowsupdate_cache/windowsupdate_cache.tar.gz
It combines a URL rewriter with another process that downloads the
files into the cache dir.
A long time ago I thought about rating-based caching instead of a time
period, and another idea was "object injection" into Squid.
In the past, what I did was to set up an "ignore-reload" rule
(violating every HTTP rule you can think of), so the first time a file
comes through nginx it becomes cacheable for Squid. I can then erase
the video file from the nginx store, but Squid still has the video (or
whatever file) in its cache_dir, so the next time it will be served
from the cache.
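
(For reference, the HTTP-violating part looks roughly like this in
squid.conf; note that "ignore-reload" lives on refresh_pattern, and the
pattern and timings here are placeholders, not my exact config:)

# sketch only: force-cache video objects, deliberately violating HTTP
refresh_pattern -i \.(flv|mp4)(\?.*)?$ 10080 90% 43200 ignore-reload ignore-no-store override-expire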

If I'm wrong or have misunderstood anything, I will be glad to learn.
(It is kind of like store_url_rewrite.)

About Facebook: they made some changes to their systems and many things
are now being cached by the browser.
I managed to cache the videos using nginx.

Eliezer
>
> Amos