Re: [squid-users] Mingw(patch for long file pointers) --with-large-files

From: chudy <chudy_Fernandez_at_yahoo.com>
Date: Wed, 20 Aug 2008 08:43:39 -0700 (PDT)

Even using the 2.7.STABLE4 version (binary for Windows) with newly created
swap files, it's still the same. I've been using the storeurl and aufs
features since squid HEAD. Now that I'm trying to use COSS, these warnings came up.

Henrik Nordstrom wrote:
>
> Sun 2008-08-17 at 20:41 -0700, chudy wrote:
>
>> One thing I've been seeing is warnings about failing to unpack meta data,
>> which I never saw with aufs.
>
> Did you wipe your cache when changing the file size API?
>
> 32-bit and 64-bit caches may be incompatible.
>
> Regards
> Henrik

...or maybe storeurl is not final, because storeurl mismatches when the content
is stored in memory and then revalidated. On second thought, there is no need to
use storeurl on smaller objects, since speed is our concern, and the objects that
usually trigger the meta data warnings are the smaller ones. I've tried
storeurl_access deny on content smaller than maximum_object_size_in_memory,
and it seems to work fine.

But I still need confirmation.
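
For reference, this is roughly the kind of rule I mean. The ACL name, the URL
patterns, and the helper path below are only an illustration, not my exact
config; as far as I understand, storeurl_access is checked on the request, so
"smaller than maximum_object_size_in_memory" has to be approximated with URL
patterns for typically-small files:

  # squid.conf (Squid 2.7) - illustrative sketch only
  # "small_objects" and the patterns are just an example of typically-small files;
  # the rewrite helper path is hypothetical
  acl small_objects urlpath_regex -i \.(gif|png|ico|css|js)$
  storeurl_access deny small_objects
  storeurl_access allow all
  storeurl_rewrite_program /usr/local/bin/store_url_rewrite.pl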

On another thought: when an object download is canceled by the client, I'd like
Squid to keep downloading it, but at the lowest bandwidth priority. Is that
possible, or is there any workaround to make it happen? Setting quick_abort_max
to -1 (correct me if I'm wrong) keeps using the same bandwidth, which would be
total congestion if these files are videos. It would be really nice if the
continued download ran at the lowest priority, and even better if the priority
went back to normal when the client retries the download.
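
For reference, these are the quick_abort knobs I mean; a minimal sketch based on
the documented squid.conf behaviour (quick_abort_min at -1 KB is the documented
way to make Squid always finish a transfer it has started; it does nothing about
bandwidth priority):

  # squid.conf - keep fetching an object even after the client aborts
  # (quick_abort_min -1 KB means never abort the server-side fetch;
  #  note this does not lower the bandwidth used by the continued fetch)
  quick_abort_min -1 KB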

-- 
View this message in context: http://www.nabble.com/Mingw%28patch-for-long-file-pointers%29---with-large-files-tp19025674p19070570.html
Sent from the Squid - Users mailing list archive at Nabble.com.