Re: [squid-users] squid deployment in 6Gbit network with tproxy as L2 bridge.

From: Pawel Mojski <pawcio_at_pawcio.net>
Date: Fri, 23 Aug 2013 14:46:16 +0200

Hi.

On 2013-08-23 14:16, Eliezer Croitoru wrote:
> Hey,
>
> There are setups like this you have and you better make sure you have
> something that shows you the status of the proxies and the LB all the
> time so you can differentiate between the load times and boxes.
Nothing is in production yet. It's still a PoC and needs a lot of tuning
and performance testing.
But to avoid the situation you describe, I'm running keepalive on the
balancer, and each box runs a tracking script that checks the box's
condition. On any error, the tracking script splits the box out of the
cluster.
>
> A 32-core proxy can handle about 1.5 GB of traffic in cases where there
> is a very intensive filtering solution.
> With squid I haven't tested it yet.
>
> Can you share a bit about your LB CPU load and setup?
>
>
Like I said, nothing is in production and I'm still not sure how it will
look in the end.
My configuration is hard to describe, but for now I have a balancer
with three 10Gbit optical cards and one 1Gbit card (for management).
Two of the 10Gbit cards are used as the in<->out bridge interfaces and
the third is connected to the switch.
16 GbE switch interfaces are connected to the 8 squid boxes (2 per box).

On the balancer I then have eth0 and eth1 (bridged) and eth2 (to the
switch); eth0 is the internet side, eth1 is the "LAN" side.
On eth2 I made 16 vlans, which are connected as access ports on the
switch, one to each box's ethernet adapter.
For box1 I have vlan11 and vlan12, for box2 vlan21 and vlan22, and so on
for each box.
Vlan X1 is used to handle the traffic between the internet and the box,
vlan X2 between the box and the LAN network.
Each vlan has its own /30 subnet.
For example vlan11 is 10.0.0.0/30, vlan12 is 10.0.0.4/30, vlan21 is
10.0.0.8/30, etc., where .1 is the balancer and .2 is the scanner.
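The /30 numbering above is regular enough to compute; a minimal sketch
(the helper name is mine, the scheme itself is just the one described
above):

```shell
#!/bin/sh
# Reproduce the addressing scheme: box N has vlans N1 and N2, and each
# vlan takes the next /30 out of 10.0.0.0/24; .1 is the balancer, .2
# the scanner. (The helper name is hypothetical.)
subnet_base() {
    # subnet_base <box> <leg> -> last octet of the /30 network address
    echo $(( (($1 - 1) * 2 + ($2 - 1)) * 4 ))
}

for box in 1 2 3; do
    for leg in 1 2; do
        base=$(subnet_base "$box" "$leg")
        echo "vlan${box}${leg}: 10.0.0.${base}/30" \
             "balancer=10.0.0.$((base + 1)) scanner=10.0.0.$((base + 2))"
    done
done
```

With 8 boxes the last subnet comes out as 10.0.0.60/30, so the whole
scheme fits inside 10.0.0.0/24.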
The next thing was to create the proper ip rule tables.
Of course I had to create 16 tables.
Table X1 routes the traffic from the internet back to the LAN, table X2
routes the traffic from the LAN to the scanner.
So, table 11 has:
localnet/8 via 10.0.0.2
and table 12 has:
0/0 via 10.0.0.6
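As ip commands those two tables could look roughly like this (a sketch
under assumptions: `<localnet>` stands for the LAN prefix from the text,
and eth2.11/eth2.12 are assumed names for the vlan interfaces):

```shell
# Table 11: return traffic from the internet goes back to the LAN via
# scanner1's internet-side leg (vlan11 -> 10.0.0.2).
ip route add <localnet>/8 via 10.0.0.2 dev eth2.11 table 11

# Table 12: new client traffic from the LAN goes to scanner1 via its
# LAN-side leg (vlan12 -> 10.0.0.6).
ip route add default via 10.0.0.6 dev eth2.12 table 12
```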

Then I'm using the iptables mangle table.
All "NEW" packets coming from eth1 with dport 80 are handled by a mangle
rule with -m statistic... to MARK the traffic for the proper table: 8
cycles, packet 0 to table 12, packet 1 to table 22, packet 2 to table 32,
and so on.
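A sketch of that round-robin marking with `-m statistic` in nth mode
(assumed mark values matching the table numbers; the exact rules are not
shown in the mail):

```shell
# Every 8th NEW port-80 connection from the LAN side gets the mark for
# one scanner's routing table.
iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 80 \
    -m conntrack --ctstate NEW \
    -m statistic --mode nth --every 8 --packet 0 -j MARK --set-mark 12
iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 80 \
    -m conntrack --ctstate NEW \
    -m statistic --mode nth --every 8 --packet 1 -j MARK --set-mark 22
# ... and so on up to --packet 7 / mark 82.

# Save the mark on the connection so replies can be matched later
# (this is what makes the "restore-mark" on return traffic possible).
iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 80 \
    -m conntrack --ctstate NEW -j CONNMARK --save-mark
```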
Then the scanner (scanner1 in this example) has one simple routing
table:
localnet/16 via 10.0.0.5
0/0 via 10.0.0.1

and the TPROXY setup there intercepts the traffic.
Traffic going back out leaves the scanner on vlan11 and is routed to
the internet.
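The scanner side could be sketched like this, combining the routing
table above with the standard TPROXY interception recipe for squid
(port and mark values are assumptions, not from the mail):

```shell
# Scanner1's routing, as described: replies toward clients go out the
# LAN-side leg (vlan12), everything else toward the internet (vlan11).
ip route add <localnet>/16 via 10.0.0.5
ip route add default via 10.0.0.1

# Usual TPROXY plumbing: deliver marked packets locally to squid's
# intercept port (3129 here is an assumed value).
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --on-port 3129 --tproxy-mark 0x1/0x1
```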
The return traffic from the internet gets its mark restored
("restore-mark") on the balancer and is forwarded using the proper table
(table 11 in this example).
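One way that mark-to-table mapping could be wired (an assumption on my
part, the mail does not show the exact rules; whether iptables sees the
physical bridge ports or only the bridge device also depends on the
setup): restore the connection mark on replies and let the ingress
interface decide which table the mark selects.

```shell
# Restore the saved connection mark on traffic coming back from the
# internet side.
iptables -t mangle -A PREROUTING -i eth0 -j CONNMARK --restore-mark

# Same mark, different ingress interface -> different routing table:
ip rule add iif eth1 fwmark 12 lookup 12   # LAN -> scanner1 (vlan12 leg)
ip rule add iif eth0 fwmark 12 lookup 11   # internet -> scanner1 (vlan11 leg)
```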

I hope you could follow at least some of it :)

So, at the end of the day, the balancer has no balancing software
installed; everything is done in kernel space. And each scanner receives
only part of the traffic, so there are no problems with IP stack limits.

Regards;
Pawel Mojski
Received on Fri Aug 23 2013 - 12:46:26 MDT
