[squid-users] Re: shared memory seems to allow size of 32K **1KB** segments (32MB)...

From: Linda W <squid-user_at_tlinx.org>
Date: Sun, 05 Aug 2012 15:37:37 -0700

Linda W wrote:
> Amos Jeffries wrote:
>> We are still limited to one page,
> ---
> 1 page or 1 segment/item?

I don't know who 'we' is... but on x86_64 Linux, I was able to use
Perl's SysV IPC calls (shmget/shmwrite/shmread/shmctl, with constants
from the IPC::SysV module) to allocate segments up to my system's
run-time limit of 32MB per shm segment (a limit I can raise if needed).
There appears to be an underlying granularity of 8KB, but shouldn't it
be easy to simply use the shm interface and allocate exact-size segments
to hold shared files?
Either that, or allocate the largest chunk size available and sub-divide it.
As for disk: if the index of files stored on disk were shared, why
couldn't the processes share a file cache? Certainly you don't want
two separate processes downloading the same file at the same time --
that would really hurt bandwidth...
Received on Sun Aug 05 2012 - 22:37:43 MDT

This archive was generated by hypermail 2.2.0 : Tue Aug 07 2012 - 12:00:01 MDT