Re: Cache_dir more than 10GB


> Hi,
>
> I reviewed the Squid filemap code, and it's clear that in some cases
> a large cache will have a high CPU load.
> The file_map_create function in filemap.c starts with 2^13 elements
> and expands the element count only after the map is full.
> So, for example, if the number of cached objects stays slightly below
> 2^23 and the bitmap size is 2^23, it will take a lot of CPU to find
> the next free bit.
>
> Itzcak
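
For context, the allocation scan being described works roughly like
this (a simplified sketch, not the actual filemap.c code; the names
find_free_bit, BITS_PER_WORD and ALL_ONES are mine):

    #include <limits.h>

    #define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)
    #define ALL_ONES (~0UL)

    /* Linear scan: skip words that are fully allocated, then walk
     * the bits of the first word that still has a free (zero) bit. */
    static long
    find_free_bit(const unsigned long *map, unsigned long nwords)
    {
        unsigned long w;
        unsigned int bit;

        for (w = 0; w < nwords; w++) {
            if (map[w] == ALL_ONES)
                continue;                 /* word fully used */
            for (bit = 0; bit < BITS_PER_WORD; bit++) {
                if (!(map[w] & (1UL << bit)))
                    return (long)(w * BITS_PER_WORD + bit);
            }
        }
        return -1;   /* map full; this is where it would be grown */
    }

With nearly 2^23 bits set, almost every word is ALL_ONES, so each
allocation walks a long run of full words before it finds a free bit.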

Just an idea...
But it looks possible to add a binary chop between the word-detection
loop and the bit-detection loop, to chop the word and seed the 'bit'
variable. That's a gain of n/2 tests immediately.
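
Something like this, say (a hypothetical helper written against the
sketch above, not the real code; run to completion it finds the free
bit in log2(n) tests, and even a single iteration gives the n/2 gain):

    /* Binary chop on a word known to contain at least one zero bit:
     * if the lower half of the remaining range is all ones, the free
     * bit must be in the upper half.  Assumes BITS_PER_WORD is a
     * power of two. */
    static unsigned int
    seed_bit(unsigned long word)
    {
        unsigned int bit = 0;
        unsigned int width = BITS_PER_WORD;

        while (width > 1) {
            unsigned int half = width / 2;
            unsigned long low_mask = ((1UL << half) - 1) << bit;
            if ((word & low_mask) == low_mask)
                bit += half;   /* lower half full; look above it */
            width = half;
        }
        return bit;            /* index of a free (zero) bit */
    }

The inner bit loop then collapses to bit = seed_bit(map[w]).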

/2c
Amos

>
> On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
> <henrik@xxxxxxxxxxxxxxxxxxx> wrote:
>> On sön, 2008-10-05 at 16:38 +0200, Itzcak Pechtalt wrote:
>>> When Squid reaches several million objects per cache_dir, it starts
>>> to consume a lot of CPU, because every insertion and deletion of an
>>> object takes a long time.
>>
>> Mine don't.
>>
>>> On my Squid, an 80-100GB cache showed this CPU consumption effect.
>>
>> That's a fairly small cache.
>>
>> The biggest cache I have run was in the 1.5TB range, split over a
>> number of cache_dir entries, about 130GB each I think.
>>
>> But it is important that you keep the number of objects per cache_dir
>> well below 2^24, preferably not more than 2^23.
>>
>>
>> What I think is that you got bitten by something other than cache size.
>>
>> Regards
>> Henrik
>>
>
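
For reference, splitting a large cache the way Henrik describes looks
something like this in squid.conf (illustrative values, not taken from
the thread; sizes are in MB, so 131072 is 128GB per directory):

    # Four ~128GB directories instead of one huge one, keeping the
    # object count per cache_dir (and per filemap) near 2^23.
    cache_dir aufs /cache1 131072 64 256
    cache_dir aufs /cache2 131072 64 256
    cache_dir aufs /cache3 131072 64 256
    cache_dir aufs /cache4 131072 64 256

At a mean object size around 16KB, 128GB comes to roughly 2^23
objects, which is about the ceiling Henrik recommends.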


