Re: [squid-users] About bottlenecks (Max number of connections, etc.)

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sun, 24 Feb 2013 17:10:37 +1300

On 23/02/2013 3:59 p.m., Manuel wrote:
> Hi,
>
> We are having problems with our Squid servers during traffic peaks. We
> had problems in the past and got various errors such as "Your cache is
> running out of filedescriptors", syncookies errors, etc., but we have
> since optimized for those and no longer see them. The problem is that
> the servers, many of which differ in resources and are spread across
> two datacenters (all running Squid as a reverse proxy caching content
> from several webservers in other datacenters), all fail to deliver
> content during big traffic peaks (HTML pages, JS and CSS files, both
> gzipped and non-gzipped, as well as images) and we do not see any
> error at all. The more connections/requests, the higher the percentage
> of clients that fail to get the content. So we are trying to find out
> where the bottleneck is. Is Squid unable to deal with more than X
> connections per second, or is there some other bottleneck? I think
> things start to fail when there are around 20,000 connections to each
> server.

What does squid -v say for you?
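The version banner and configure options are the first things to check.
The output below is illustrative only; the version string and options
will differ on your build:

    $ squid -v
    Squid Cache: Version 3.2.7
    configure options:  '--prefix=/usr' '--with-filedescriptors=65536' ...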

Squid does have a total service capacity, and it sits somewhere around
that 20K mark as the result of a large number of small cumulative
details:

  * The number of connections any Squid can have attached is limited
only by your configured FD limits and available server RAM. Squid uses
~64 KB per network socket for traffic state, which equates to around
2 GB of RAM just for I/O buffers at 20,000 concurrent client
connections. A sizing sketch follows this list.

* How fast each client can be served data is bottlenecked by both the
TCP network stack and CPU speed. Look up "Buffer Bloat" for things
which affect Squid operation from the TCP direction; a tuning sketch
follows this list.

* The Squid parser is not very efficient yet, which can bottleneck
request handling speed. On the developer test machine (a slow
single-threaded server with a ~1.2 GHz processor) Squid-3.2 achieves
around 950 req/sec. On common ISP hardware, which has much faster CPU
capacity, Squid is known to reach over twice that (~2.5K req/sec).
  - at 20K concurrent connections those rates mean each connection is,
on average, sending only one request every 8-21 seconds (20,000 /
2,500 = 8; 20,000 / 950 = ~21). A measurement sketch follows this list.
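
On the FD/RAM point, a minimal sizing sketch. The 65536 value is only
an example; max_filedescriptors is the relevant squid.conf directive
in 3.x, and the OS limit must be at least as high:

    # shell: the per-process FD limit Squid inherits (example output)
    $ ulimit -n
    1024

    # squid.conf: ask Squid to use a higher FD ceiling
    max_filedescriptors 65536

    # RAM back-of-envelope: 20,000 sockets x 64 KB = ~1.25 GB of I/O
    # buffers; roughly double that once the matching server-side
    # sockets of a reverse proxy are counted.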
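On the TCP side the knobs live in the OS, not squid.conf. A minimal
Linux sketch; the sysctls are real, but the values here are
illustrative, not recommendations:

    # per-socket TCP buffer auto-tuning (bytes: min default max)
    sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
    # hard caps for all socket buffers
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304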
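And to find where your own hardware tops out, a quick load test
against the proxy tells you more than any published figure. A sketch
using ApacheBench; the URL and counts are placeholders:

    # 50,000 requests, 200 concurrent, against a small cacheable object
    ab -n 50000 -c 200 http://your-proxy.example.com/small-cached.css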

Of course the Squid developers would all like Squid to be as fast as
possible (my personal aim is to break the 1K req/sec barrier on that
1.2 GHz machine) and there is a lot of ongoing effort to improve speed
all over Squid. But once you are reaching the kinds of limits above,
there is no magic setting that can gain big percentage-points more
speed. If you want to participate in the improvements, please upgrade
to the latest Squid available (3.3 daily snapshot [stable] or 3.HEAD
packages [experimental]) and profile anything and everything you
suspect might improve performance. Patch contributions to squid-dev
are very welcome; discussions highlighting what needs updating almost
equally so.
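
As one possible starting point, a generic gprof-instrumented build
from a source tarball. The version and paths are placeholders, and
this is just one profiling approach among several:

    # build an instrumented Squid (default prefix is /usr/local/squid)
    tar xzf squid-3.3.x.tar.gz && cd squid-3.3.x
    ./configure CFLAGS=-pg CXXFLAGS=-pg
    make && make install

    # run in no-daemon mode (-N) so gmon.out is written on clean exit,
    # drive some test traffic, then inspect the hot spots:
    gprof /usr/local/squid/sbin/squid gmon.out | head -40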

Amos