Re: [squid-users] Squid on DualxQuad Core 8GB Rams - Optimization - Performance - Large Scale - IP Spoofing

From: Adrian Chadd <adrian@dont-contact.us>
Date: Wed, 17 Oct 2007 14:28:44 +0800

On Wed, Oct 17, 2007, Michel Santos wrote:

> o really? how much would that be? do you have a number or is it just talk?

I benchmarked one unit for SLB and some basic caching at a previous job.
They could do more than Squid could on the P4 hardware I had at the time.
That was ~2 years ago now. Admittedly, epoll/kqueue support is in now and
I've dug my fingers in to make things slightly faster, which may have helped
a bit, but not a lot.

> I am not so sure that this 2400 req/sec wasn't really per minute, and also
> wasn't from cache but only incoming requests ...
>
> I'll pay you a beer or even two if you show me a "device"-type PIII which
> can satisfy 2400 req/sec from disk

http://polygraph.ircache.net/Results/cacheoff-2/polyrep/bench-114/

It certainly wouldn't be 2400 requests from disk, but it's entirely within
the realm of reason to push and pull a hundred small-object requests off
disk; even more if you can exploit temporal locality.

Please check http://polygraph.ircache.net/Results/cacheoff-2/ and
note the date. January 2000.

> well, it's not only well understood but also well known that a Ferrari runs
> faster than the famous John-Doe-mobile - but the price difference is just as
> well known, and even if all of that is well documented it makes no sense at
> all to compare the two

> squid does a pretty good job, not only in getting high hit rates but
> especially when you consider the price
>
> unfortunately squid is not a multi-threaded application, which by the way
> does not stop you from running several instances as a workaround

That won't change without resources.
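
(FWIW, for anyone who wants to try the multi-instance workaround: it's
really just one config file per instance with its own port, PID file, logs
and cache_dir. The ports and paths below are made-up examples, and you
still need something in front - separate IPs, DNS round-robin, a load
balancer - to spread clients across the instances.)

  # /etc/squid/squid-1.conf (example paths/ports only)
  http_port 3128
  pid_filename /var/run/squid-1.pid
  cache_log /var/log/squid/cache-1.log
  access_log /var/log/squid/access-1.log
  cache_dir diskd /cache1 20000 16 256

  # /etc/squid/squid-2.conf
  http_port 3129
  pid_filename /var/run/squid-2.pid
  cache_log /var/log/squid/cache-2.log
  access_log /var/log/squid/access-2.log
  cache_dir diskd /cache2 20000 16 256

  # start each instance against its own config:
  #   squid -f /etc/squid/squid-1.conf
  #   squid -f /etc/squid/squid-2.conf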

> unfortunately again, diskd is kind of orphaned, but it certainly is
> _the_kind_of_choice_ for SMP machines, by design, and even more so when
> running several diskd processes per squid process

AUFS was also designed with SMP machines in mind. All it really does is push
the blocking disk IO into other thread contexts. It's not at all that
complicated. In practice AUFS should be faster than diskd, but it hasn't
gotten much attention over the last few years.
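
(For reference, the two store types are selected with nothing more than the
cache_dir line; the sizes and paths here are made-up examples:)

  # AUFS: blocking disk I/O is pushed into a pool of threads inside Squid
  cache_dir aufs /cache1 20000 16 256

  # diskd: disk I/O is handed to a separate diskd helper process per cache_dir
  cache_dir diskd /cache1 20000 16 256 Q1=64 Q2=72

Q1/Q2 control how deep the diskd message queues get before Squid starts
bypassing or blocking on the store; 64/72 are the documented defaults.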

> again unfortunately, people are told that squid is not SMP capable and that
> there is no advantage in using SMP machines for it, so they configure their
> machines to death on single dies with 1 meg of cache or 2 and get nothing
> out of it - so where does it end??? Easy answer: squid ends up being a
> proxy for NATting corporate networks or poor ISPs which do not have
> address space - *BUT NOT* a caching machine anymore

We can change that. Time and effort.

> but it is fortunately true that caching performance is first of all a
> matter of fast hardware
>
> so that you can see it and not just read the usual bla-bla, I am attaching a
> well-known mrtg graph of the hit rate of a dual Opteron sitting in front of
> a 4MB/s ISP POP

Which is great. Share statistics!

> and I get considerably more hits than you claimed at the beginning on larger
> POPs - so I do not know where you get your 1000 req/sec limit for squid
> from ... must be from your P-III goody ;)

Nope; this is admittedly not on current-generation hardware, but I've
had reports from people who say Squid gets unstable for them above 1000-1500
requests a second. The instability is generally "we're out of CPU, so we
just can't handle the load." Of course, the load Squid can handle is very
workload dependent.

But PC hardware can do a hell of a lot more.

> but then, in the end, the actual squid marketing is pretty bad; nobody talks
> about caching, only about proxying, authenticating and ACLing. Even the
> makers are not defending caching at all, and apparently they are not friends
> of running squid as a multi-instance application, because the documentation
> about it is very poor and sad

There's no Squid marketing. There are plenty of companies who use Squid in
their own products and market -that-, but Squid as a project is just a group
of us who work on it when we can/when we get contracts. There's no Squid
organisation to do any marketing for; there's no one being paid just to
document what's already there. Squid development these days is mostly
reactive to specific customer needs. I don't think any of the developers
at the present time are actively seeking funding for architectural and
documentation changes; well, besides me, but I haven't been very vocal
about it yet.

Don't think we aren't friendly to people running multiple copies of Squid
on a box. The people who do it just don't seem to have written documentation.
We're a contribution-based project, after all.

> probably an answer to current demand, and so they go with the crowd;
> bandwidth is very cheap almost everywhere, so why should people spend their
> brains and bucks on caching techniques. Unfortunately my bandwidth is
> expensive and I am not interested in proxying or any other feature, so
> perhaps my situation and position is different and not the same as
> elsewhere.

Would you be interested in picking up a Squid support contract if/when
my company offers them publicly, given a product roadmap, faster
support turnaround and not-for-public add-ons? Consider how much you
pay for bandwidth and how much you're probably saving by using Squid -
and some of the features I'm playing with/want to play with could
save people even more bandwidth (specifically caching flash and software
updates).
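
(To give a flavour of the updates angle: a lot of it can already be
approximated with nothing fancier than refresh_pattern rules for the big
installer files - the pattern and lifetimes below are rough, untested
examples of that style, not what I'm actually building:)

  # cache big installer-style objects aggressively (times are in minutes)
  refresh_pattern -i \.(cab|exe|msi|msp)$  4320 80% 43200 reload-into-ims
  # make sure the store will accept objects that big
  maximum_object_size 512 MB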

Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -