Re: Logfile analysis scripts

From: Peter Childs <pjchilds@dont-contact.us>
Date: Wed, 19 Jun 1996 20:37:43 +0930 (CST)

In article <4q8li8$sbi@al.imforei.apana.org.au> you wrote:

: Duane Wessels wrote:

: > >: to give me some statistics on Client Usage and others. The only thing
: > >: that still doesn't work is the Top 25 listing.
: > >
: > > I find that initially this worked, then as the logs grew a little
: > > (we are only a small site) perl started core dumping
: > > (perl 5.002 under FreeBSD 2.1-stable)

: Perl doesn't dump core for me (Linux 2.0) but the script will simply
: write just the table headers to the report file. Perl doesn't need any
: special setup to use dbm databases, does it?
:
: > This is why the access-extract-urls.pl uses dbm databases for
: > the assoc arrays. It opens them in /var/tmp/...

: There are no files in /var/tmp while the script runs :( . Did you change
: the scripts since squid beta11 came out, because that's when I got them.

 I've got some _very_ large files left in /var/tmp so I take it that
 the core dumps are happening then...
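
 For what it's worth, here's a rough sketch of the dbm-backed assoc
 array technique Duane describes. This is *not* the actual
 access-extract-urls.pl code, and the filename and log format below
 are made up:

    #!/usr/bin/perl
    # Sketch only: tie a hash to a dbm file under /var/tmp so the
    # URL counts don't have to fit in memory as the logs grow.
    use AnyDBM_File;
    use Fcntl;                          # for O_RDWR, O_CREAT

    my %count;
    my $dbfile = "/var/tmp/urls.$$";    # made-up filename

    tie(%count, 'AnyDBM_File', $dbfile, O_RDWR|O_CREAT, 0640)
        or die "can't tie $dbfile: $!";

    while (<>) {
        my @f = split;                  # assumes the URL is field 7
        $count{$f[6]}++ if defined $f[6];
    }

    untie %count;

 If perl dumps core somewhere in that loop, the dbm file gets left
 behind, which would explain the large files sitting in /var/tmp.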

 On a brighter note I've hacked some scripts together that produce a
 web page of ftp stuff in our cache... since we are a small
 cache site this is good for us :)

    http://www.sa.apana.org.au/alt/cache.html

 Also, all the files in the cache (proto == ftp) are now in our
 local ftp database too... that cache_object:// stuff is *really*
 neat.
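
 The guts of it is nothing fancy, just an HTTP request for a
 cache_object:// URL and a grep over what comes back. Something along
 these lines (the host, port and output format here are guesses, not
 lifted from my actual script):

    #!/usr/bin/perl
    # Sketch only: ask the cache manager for its object list and
    # print any ftp:// URLs found in the reply.
    use Socket;

    my $host = 'localhost';                # guess: cache on this machine
    my $port = 3128;                       # guess: default proxy port

    my $iaddr = inet_aton($host)           or die "no host: $host";
    my $paddr = sockaddr_in($port, $iaddr);
    socket(SOCK, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
        or die "socket: $!";
    connect(SOCK, $paddr)                  or die "connect: $!";
    select(SOCK); $| = 1; select(STDOUT);  # unbuffer the socket

    print SOCK "GET cache_object://$host/objects HTTP/1.0\r\n\r\n";

    while (<SOCK>) {
        print "$1\n" if m!(ftp://\S+)!;    # keep only the ftp URLs
    }
    close(SOCK);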

 If anyone is interested in having the perl/c code I'm quite happy
 to post it here... you need to have it running on a machine with
 access to the cache_object stuff and that's about it...

 Peter

--
 Peter Childs  ---  http://www.imforei.apana.org.au/~pjchilds
  The internet is full, please try again in half an hour...