Re: [squid-users] squid with multiple NICS

From: Edward D. Millington <edward@dont-contact.us>
Date: Mon, 27 Jan 2003 09:22:51 -0400

As minimal a config as possible.

Here is my squid.conf

cache_mem 32 MB
memory_pools off
maximum_object_size 34 MB
maximum_object_size_in_memory 10 KB
cache_swap_low 96
cache_swap_high 98
ipcache_size 16384
ipcache_low 99
ipcache_high 100
shutdown_lifetime 0 seconds
connect_timeout 1 minute
request_timeout 1 minute
read_timeout 1 minute
pconn_timeout 90 seconds
prefer_direct off
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl squid src 192.168.2.200/32
acl edward src 192.168.2.17/32
acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 1025-65535
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow manager squid
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#
# Virus Access List
acl virus urlpath_regex "/usr/local/squid/etc/virus.txt"
acl no1 src 66.20.74.156/32
http_access deny no1
http_access deny virus
#
# END of Virus Access List

# Domain Access List
acl all src 0.0.0.0/0.0.0.0
acl LAN src 192.168.2.0/24 200.50.67.0/24 200.50.68.0/24 66.205.17.0/24
#acl domains dstdomain "/usr/local/squid/etc/domains.txt"
http_access allow LAN
#http_access deny !domains all
miss_access allow all
icp_access deny all
#END OF Domain Access List
#

cache_effective_user squid
cache_effective_group squid
#
#
wccp_router x.x.x.x
#
# DEBUG
#debug_options ALL,2
# END (DEBUG)
# LOGS
#
emulate_httpd_log off
logfile_rotate 1
cache_store_log none
#cache_access_log none
#cache_log none
# END (LOGS)
#
# STORAGE
coredump_dir /usr/local/squid/
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /usr/local/squid/var/cache 3200 16 128
cache_dir aufs /cache1 3200 16 128
cache_dir aufs /cache2 3200 16 128
cache_dir aufs /cache3 3200 16 128
cache_dir aufs /cache4 3200 16 128
cache_dir aufs /cache5 3200 16 128
cache_dir aufs /cache6 3200 16 128
cache_dir aufs /cache7 3200 16 128
cache_dir aufs /cache8 3200 16 128
# END (STORAGE)
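
On the multiple-NIC question itself: Squid needs nothing special to use
several interfaces, but you can pin traffic to them explicitly. A minimal
sketch of what that could look like (the addresses below are placeholders
for your own interface IPs, not taken from my config):

http_port 192.168.2.200:3128        # listen only on the LAN-facing NIC
tcp_outgoing_address 200.50.67.1    # source address for outbound TCP (Internet-facing NIC)
udp_outgoing_address 200.50.67.1    # likewise for ICP and other UDP traffic

With that, client traffic arrives on one card and origin-server traffic
leaves on the other; the kernel routing table still decides which physical
NIC each packet actually uses.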

./configure --enable-storeio=ufs,aufs \
    --enable-removal-policies=lru,heap \
    --enable-delay-pools --enable-snmp \
    --enable-cachemgr-hostname=192.168.2.200 \
    --enable-err-languages=English \
    --enable-linux-netfilter
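
To double-check which of these options a running binary was actually
built with (assuming the default /usr/local/squid install prefix):

/usr/local/squid/sbin/squid -v    # prints the version and the ./configure arguments it was built with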

000.03| Command: polyclt --config e:/polygraph/workloads/simple.pg --verb_lvl 10 --proxy 192.168.2.200:80 --log e:/clt.txt
000.03| Configuration:
        version: 2.7.6
        host_type: i386-unknown-W2K
        verb_lvl: 10
        dump: <none>
        dump_size: 1.000KB
        notify: <none>
        doorman_listen_at: <none>
        doorman_send_to: <none>
        label: <none>
        fd_limit: 11906
        config: e:/polygraph/workloads/simple.pg
        cfg_dirs:
        console: -
        log: e:/clt.txt
        log_buf_size: 64.000KB
        sample_log: e:/clt.txt
        sample_log_buf_size: 64.000KB
        stats_cycle: 5.00sec
        file_scan: select
        priority_sched: 5
        fake_hosts:
        delete_old_addrs: yes
        idle_tout: <none>
        local_rng_seed: 1
        global_rng_seed: 1
        unique_world: on
        proxy: 192.168.2.200:80
        ports: <none>
        icp_tout: 2.00sec
        ign_false_hits: on
        ign_bad_cont_tags: off
        prn_false_misses: off
000.03| Server content distributions:
        Server S101:
                content planned% likely% error% mean_sz_bytes
           some-content 100.00 100.00 0.00 11277.53
        expected average server-side cachability: 95.00%
        expected average server-side object size: 11277.53Bytes

000.03| Phases:
     phase pop_beg pop_end load_beg load_end rec_beg rec_end smsg_beg smsg_end goal flags
      dflt    1.00    1.00     1.00     1.00    1.00    1.00     1.00     1.00 <none>

000.03| StatsSamples:
        static stats samples: 0
        dynamic stats samples: 0

000.03| FDs: -1 out of 2147483647 FDs can be used; safeguard limit: 11906
000.03| resource usage:
        CPU Usage: 0usec sys + 0usec user = 0usec
        Page faults with physical i/o: 0

000.03| group-id: 07d3dc09.05910400:00000002 pid: 1024
000.03| current time: 1043670615.000936 or Mon, 27 Jan 2003 12:30:15 GMT
000.03| fyi: PGL configuration stored (663bytes)
000.03| fyi: no bench selected with use(); will not attempt to create agent addresses
000.05| created R101 [1 / 07d3dc09.05910400:00000004] on 192.168.2.17
000.05| created R101 [2 / 07d3dc09.05910400:00000008] on 192.168.2.17
000.05| created R101 [3 / 07d3dc09.05910400:0000000a] on 192.168.2.17
000.05| created R101 [4 / 07d3dc09.05910400:0000000c] on 192.168.2.17
000.05| created R101 [5 / 07d3dc09.05910400:0000000e] on 192.168.2.17
000.05| created R101 [6 / 07d3dc09.05910400:00000010] on 192.168.2.17
000.05| created R101 [7 / 07d3dc09.05910400:00000012] on 192.168.2.17
000.05| created R101 [8 / 07d3dc09.05910400:00000014] on 192.168.2.17
000.05| created R101 [9 / 07d3dc09.05910400:00000016] on 192.168.2.17
000.05| created R101 [10 / 07d3dc09.05910400:00000018] on 192.168.2.17
000.05| created R101 [11 / 07d3dc09.05910400:0000001a] on 192.168.2.17
000.05| created R101 [12 / 07d3dc09.05910400:0000001c] on 192.168.2.17
000.05| created R101 [13 / 07d3dc09.05910400:0000001e] on 192.168.2.17
000.05| created R101 [14 / 07d3dc09.05910400:00000020] on 192.168.2.17
000.05| created R101 [15 / 07d3dc09.05910400:00000022] on 192.168.2.17
000.05| created R101 [16 / 07d3dc09.05910400:00000024] on 192.168.2.17
000.05| created R101 [17 / 07d3dc09.05910400:00000026] on 192.168.2.17
000.05| created R101 [18 / 07d3dc09.05910400:00000028] on 192.168.2.17
000.05| created R101 [19 / 07d3dc09.05910400:0000002a] on 192.168.2.17
000.05| created R101 [20 / 07d3dc09.05910400:0000002c] on 192.168.2.17
000.05| created R101 [21 / 07d3dc09.05910400:0000002e] on 192.168.2.17
000.05| created R101 [22 / 07d3dc09.05910400:00000030] on 192.168.2.17
000.05| created R101 [23 / 07d3dc09.05910400:00000032] on 192.168.2.17
000.05| created R101 [24 / 07d3dc09.05910400:00000034] on 192.168.2.17
000.05| created R101 [25 / 07d3dc09.05910400:00000036] on 192.168.2.17
000.05| created R101 [26 / 07d3dc09.05910400:00000038] on 192.168.2.17
000.05| created R101 [27 / 07d3dc09.05910400:0000003a] on 192.168.2.17
000.05| created R101 [28 / 07d3dc09.05910400:0000003c] on 192.168.2.17
000.05| created R101 [29 / 07d3dc09.05910400:0000003e] on 192.168.2.17
000.05| created R101 [30 / 07d3dc09.05910400:00000040] on 192.168.2.17
000.05| created R101 [31 / 07d3dc09.05910400:00000042] on 192.168.2.17
000.05| created R101 [32 / 07d3dc09.05910400:00000044] on 192.168.2.17
000.05| created R101 [33 / 07d3dc09.05910400:00000046] on 192.168.2.17
000.05| created R101 [34 / 07d3dc09.05910400:00000048] on 192.168.2.17
000.05| created R101 [35 / 07d3dc09.05910400:0000004a] on 192.168.2.17
000.05| created R101 [36 / 07d3dc09.05910400:0000004c] on 192.168.2.17
000.05| created R101 [37 / 07d3dc09.05910400:0000004e] on 192.168.2.17
000.05| created R101 [38 / 07d3dc09.05910400:00000050] on 192.168.2.17
000.05| created R101 [39 / 07d3dc09.05910400:00000052] on 192.168.2.17
000.05| created R101 [40 / 07d3dc09.05910400:00000054] on 192.168.2.17
000.05| created R101 [41 / 07d3dc09.05910400:00000056] on 192.168.2.17
000.05| created R101 [42 / 07d3dc09.05910400:00000058] on 192.168.2.17
000.05| created R101 [43 / 07d3dc09.05910400:0000005a] on 192.168.2.17
000.05| created R101 [44 / 07d3dc09.05910400:0000005c] on 192.168.2.17
000.05| created R101 [45 / 07d3dc09.05910400:0000005e] on 192.168.2.17
000.05| created R101 [46 / 07d3dc09.05910400:00000060] on 192.168.2.17
000.05| created R101 [47 / 07d3dc09.05910400:00000062] on 192.168.2.17
000.05| created R101 [48 / 07d3dc09.05910400:00000064] on 192.168.2.17
000.05| created R101 [49 / 07d3dc09.05910400:00000066] on 192.168.2.17
000.05| created R101 [50 / 07d3dc09.05910400:00000068] on 192.168.2.17
000.05| created R101 [51 / 07d3dc09.05910400:0000006a] on 192.168.2.17
000.05| created R101 [52 / 07d3dc09.05910400:0000006c] on 192.168.2.17
000.05| created R101 [53 / 07d3dc09.05910400:0000006e] on 192.168.2.17
000.05| created R101 [54 / 07d3dc09.05910400:00000070] on 192.168.2.17
000.05| created R101 [55 / 07d3dc09.05910400:00000072] on 192.168.2.17
000.05| created R101 [56 / 07d3dc09.05910400:00000074] on 192.168.2.17
000.05| created R101 [57 / 07d3dc09.05910400:00000076] on 192.168.2.17
000.05| created R101 [58 / 07d3dc09.05910400:00000078] on 192.168.2.17
000.05| created R101 [59 / 07d3dc09.05910400:0000007a] on 192.168.2.17
000.05| created R101 [60 / 07d3dc09.05910400:0000007c] on 192.168.2.17
000.05| created R101 [61 / 07d3dc09.05910400:0000007e] on 192.168.2.17
000.05| created R101 [62 / 07d3dc09.05910400:00000080] on 192.168.2.17
000.05| created R101 [63 / 07d3dc09.05910400:00000082] on 192.168.2.17
000.05| created R101 [64 / 07d3dc09.05910400:00000084] on 192.168.2.17
000.05| created R101 [65 / 07d3dc09.05910400:00000086] on 192.168.2.17
000.05| created R101 [66 / 07d3dc09.05910400:00000088] on 192.168.2.17
000.05| created R101 [67 / 07d3dc09.05910400:0000008a] on 192.168.2.17
000.05| created R101 [68 / 07d3dc09.05910400:0000008c] on 192.168.2.17
000.05| created R101 [69 / 07d3dc09.05910400:0000008e] on 192.168.2.17
000.05| created R101 [70 / 07d3dc09.05910400:00000090] on 192.168.2.17
000.05| created R101 [71 / 07d3dc09.05910400:00000092] on 192.168.2.17
000.05| created R101 [72 / 07d3dc09.05910400:00000094] on 192.168.2.17
000.05| created R101 [73 / 07d3dc09.05910400:00000096] on 192.168.2.17
000.05| created R101 [74 / 07d3dc09.05910400:00000098] on 192.168.2.17
000.05| created R101 [75 / 07d3dc09.05910400:0000009a] on 192.168.2.17
000.05| created R101 [76 / 07d3dc09.05910400:0000009c] on 192.168.2.17
000.05| created R101 [77 / 07d3dc09.05910400:0000009e] on 192.168.2.17
000.05| created R101 [78 / 07d3dc09.05910400:000000a0] on 192.168.2.17
000.05| created R101 [79 / 07d3dc09.05910400:000000a2] on 192.168.2.17
000.05| created R101 [80 / 07d3dc09.05910400:000000a4] on 192.168.2.17
000.05| created R101 [81 / 07d3dc09.05910400:000000a6] on 192.168.2.17
000.05| created R101 [82 / 07d3dc09.05910400:000000a8] on 192.168.2.17
000.05| created R101 [83 / 07d3dc09.05910400:000000aa] on 192.168.2.17
000.05| created R101 [84 / 07d3dc09.05910400:000000ac] on 192.168.2.17
000.05| created R101 [85 / 07d3dc09.05910400:000000ae] on 192.168.2.17
000.05| created R101 [86 / 07d3dc09.05910400:000000b0] on 192.168.2.17
000.05| created R101 [87 / 07d3dc09.05910400:000000b2] on 192.168.2.17
000.05| created R101 [88 / 07d3dc09.05910400:000000b4] on 192.168.2.17
000.05| created R101 [89 / 07d3dc09.05910400:000000b6] on 192.168.2.17
000.05| created R101 [90 / 07d3dc09.05910400:000000b8] on 192.168.2.17
000.05| created R101 [91 / 07d3dc09.05910400:000000ba] on 192.168.2.17
000.05| created R101 [92 / 07d3dc09.05910400:000000bc] on 192.168.2.17
000.05| created R101 [93 / 07d3dc09.05910400:000000be] on 192.168.2.17
000.05| created R101 [94 / 07d3dc09.05910400:000000c0] on 192.168.2.17
000.05| created R101 [95 / 07d3dc09.05910400:000000c2] on 192.168.2.17
000.05| created R101 [96 / 07d3dc09.05910400:000000c4] on 192.168.2.17
000.05| created R101 [97 / 07d3dc09.05910400:000000c6] on 192.168.2.17
000.05| created R101 [98 / 07d3dc09.05910400:000000c8] on 192.168.2.17
000.05| created R101 [99 / 07d3dc09.05910400:000000ca] on 192.168.2.17
000.05| created R101 [100 / 07d3dc09.05910400:000000cc] on 192.168.2.17
000.05| created 100 agents total
000.05| fyi: current state (1) stored
000.05| fyi: working set size goal: first 1.00hour of the test
000.05| fyi: max local population size: 100 robots
000.05| fyi: server scan completed with all local robots ready to hit all 1 visible servers
000.05| fyi: reached max local population size: 100 robots
000.13| i-dflt 513 102.60 953 59.26 0 100
000.22| i-dflt 1470 191.40 520 88.40 0 100
000.30| i-dflt 2612 228.40 440 90.37 0 100
000.40| i-dflt 3692 180.03 476 88.43 0 100
000.42| Connection.cc:210: error: 1/1 (s10048) Only one usage of each socket address (protocol/network address/port) is normally permitted.

000.42| error writing to 192.168.2.200:80 after 0 reads, 0 writes, 1 xacts
000.48| i-dflt 4697 201.19 516 89.65 1 100
000.57| i-dflt 5661 192.80 563 92.53 0 100
000.65| i-dflt 6556 179.00 566 90.95 0 100
000.73| i-dflt 7640 216.80 457 91.61 0 100
000.82| i-dflt 8554 182.80 555 88.84 0 100
000.92| i-dflt 9435 146.84 534 89.22 0 100
001.00| i-dflt 10432 199.40 609 89.87 0 100
001.08| i-dflt 11208 155.20 674 90.72 0 100
001.17| i-dflt 11890 136.40 697 89.44 0 100
001.25| i-dflt 12798 181.60 573 82.82 0 100
001.33| i-dflt 13844 209.19 481 87.48 0 100
001.42| i-dflt 14843 199.80 495 84.98 0 100
001.50| i-dflt 15860 203.40 499 85.74 0 100
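
A note on the s10048 error in the middle of the run: that is Winsock
WSAEADDRINUSE, which on a Windows 2000 Polygraph client usually means the
box briefly ran out of ephemeral ports (the default range tops out at
5000, and closed sockets sit in TIME_WAIT for 240 seconds). If it shows
up more than once in a while, the commonly suggested client-side
workaround is to widen the port range and shorten TIME_WAIT; the registry
values below are a suggestion to verify, not something taken from this
test:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; allow ephemeral ports up to 65534 (default upper bound is 5000)
"MaxUserPort"=dword:0000fffe
; hold TIME_WAIT sockets for 30 seconds instead of the default 240
"TcpTimedWaitDelay"=dword:0000001e

The client needs a reboot for these to take effect.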

Thank you very much.

Best regards

Edward Millington
BSc, Network+, I-Net+, CIW Associate
Systems Administrator, Sr
Cariaccess Communications Ltd.
Palm Plaza
Wildey
St. Michael
Barbados

Phone:  1 246 430 7435
Mobile: 1 246 234 6278
Fax:    1 246 431 0170

edward@cariaccess.com
www.cariaccess.com

-----Original Message-----
From: "HBK" <hz@magic.net.pk>
To: "Hegedus,Ervin" <airween@amit.hu>, Edward Millington
<edward@cariaccess.com>
Cc: squid-users@squid-cache.org
Date: Mon, 27 Jan 2003 17:45:23 +0500
Subject: Re: [squid-users] squid with multiple NICS

> Hi
>
> I'm also using squid 2.5 STABLE1 on Red Hat Linux 8. I would like to
> know what the best configuration for a squid machine is, and how much
> load squid can handle.
>
> My server configuration is
> Intel server board SAI2
> P III 1.26 GHz
> 1 GB RAM
> 1 x 36 GB SCSI disk
>
> thanks
>
>
> ---------- Original Message -----------
> From: "Hegedus, Ervin" <airween@amit.hu>
> To: Edward Millington <edward@cariaccess.com>
> Sent: Sun, 26 Jan 2003 14:09:16 +0100
> Subject: Re: [squid-users] squid with multiple NICS
>
> > Hello,
> >
> > > Has anyone ever put together squid with multiple NICs?
> > Yes, I have, but I don't use it currently; just one.
> >
> > > In other words, under a very stressed environment, does it use
> > > all 2 or more NIC cards at the same time?
> > I tried it with 2 NICs: a client side and an Internet side.
> > But the network admins didn't like it... :)
> >
> > > Currently, I have been able to run my squid devel 3 at over
> > > 160 req/sec.
> > My Squid does ~800 req/sec with 1 NIC.
> >
> > Machine is a PIII-500 proc., with 512 MB RAM, and 2x9 GB SCSI
> > disks. OS is FreeBSD 4.7, Squid is 2.5S1.
> >
> > There is no problem...
> ------- End of Original Message -------
>