RE: I see this error in cache.log file no free membufs

Dear Markus and Amos,

I have made the changes you proposed. I have dropped the max-size on the COSS partition to 100KB, so the COSS cache_dir line now reads as follows:

cache_dir coss /cache3/coss1 110000 max-size=102400 max-stripe-waste=32768 block-size=8192 membufs=100
cache_dir aufs /cache1 115000 16 256 min-size=102401
cache_dir aufs /cache2 115000 16 256 min-size=102401
cache_dir aufs /cache4/cache1 240000 16 256 min-size=102401

After doing this I have noticed the following warning every now and then (usually every 1-2 hours) in the cache.log file:

squidaio_queue_request: WARNING - Queue congestion

What I also noticed using iostat is that the big HDD with the AUFS dir is handling a lot of write requests, while the other 2 HDDs with AUFS dirs rarely have disk writes. Is this normal behavior? Since I have 3 AUFS cache_dirs, shouldn't squid's disk reads and writes be spread somewhat equally between the 3 AUFS partitions? Do you think I should go for a higher max-size on the COSS partition to relieve the extra I/O on the big AUFS cache_dir?
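
For reference, the per-disk load can be compared with something like the following (assuming sysstat's iostat; device names will differ per system):

   iostat -x 5

The w/s and %util columns make any write imbalance between the three AUFS disks easy to see.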

Thanks again for your excellent support.

Sincerely,

Ragheb Rustom
Smart Telecom S.A.R.L

-----Original Message-----
From: Amos Jeffries [mailto:squid3@xxxxxxxxxxxxx] 
Sent: Thursday, July 21, 2011 2:00 AM
To: squid-users@xxxxxxxxxxxxxxx
Subject: Re:  I see this error in cache.log file no free membufs

 On Wed, 20 Jul 2011 18:23:10 -0300, Marcus Kool wrote:
> The message indicates that the number of membufs should be increased,
> because there are insufficient membufs to use for caching
> objects.  The reason for having 'insufficient membufs'
> is explained below.
>
> Given the fact that the average object size is 13 KB, the given
> configuration effectively puts a very large percentage of objects,
> most likely more than 90%, in the COSS-based cache dir.  This puts
> a high pressure on (the disk with) COSS and I bet that the disk
> with COSS (/cache3) is 100% busy while the other three are mostly 
> idle.
>
> COSS is very good for small objects and AUFS is fine with larger 
> objects.
>
> There is one larger disk.  But this larger disk is not faster.
> It will perform worse with more objects on it than the other disks.
>
> To find out more about disk I/O and pressure on the disk with COSS,
> one can evaluate the output of iostat or 'vmstat -d 5 5'.
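>
> For example (assuming sysstat's iostat and the usual vmstat; device
> names will differ):
>
>    iostat -x 5       # extended per-device statistics every 5 seconds
>    vmstat -d 5 5     # per-disk read/write counters, five samples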
>
> I recommend changing the configuration to utilise all disks in a
> more balanced way.  Be sure to also look at the output of iostat.
> My suggestion is to use COSS only for objects smaller than 64 KB.
> Depending on the average object size of your cache, this limit
> may be set lower.
>
> So I suggest:
>
> cache_dir coss /cache3 110000 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
> cache_dir aufs /cache1 115000 16 256 min-size=65536
> cache_dir aufs /cache2 115000 16 256 min-size=65536
> cache_dir aufs /cache4 115000 16 256 min-size=65536
>
> Then observe the log and the output of iostat.
> If the disk I/O is balanced and the message about membufs reappears,
> and you have sufficient free memory, you may increase membufs.  If
> the I/O is not balanced, the 64KB limit may be decreased to 16KB.
>
> Depending on the results and iostat, it may be better to
> have 2 disks with COSS and 2 disks with AUFS:
>
> cache_dir coss /cache3 110000 max-size=16383 max-stripe-waste=32768 block-size=8192 membufs=15
> cache_dir coss /cache1 110000 max-size=16383 max-stripe-waste=32768 block-size=8192 membufs=15
> cache_dir aufs /cache2 115000 16 256 min-size=16384
> cache_dir aufs /cache4 115000 16 256 min-size=16384
>
> Marcus
>

 NP: use the cache manager "info" report to find the average object 
 size.
   squidclient mgr:info
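
 For example, the relevant line can be pulled straight out of the
 report (the field name here is taken from typical mgr:info output
 and may vary between Squid versions):
   squidclient mgr:info | grep 'Mean Object Size'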

 COSS handles things in 1MB slices. This is the main reason
 max-size=1048575 is a bad idea: one object per slice is less
 efficient than AUFS's one object per file. So a 110GB COSS dir will
 be juggling a massive 110000 slices on and off of disk as things are
 needed. I recommend using a smaller COSS overall size and using the
 remainder of each disk for AUFS storage of the larger objects. (COSS
 is the exception to the one-dir-per-spindle guideline.)
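
 (In numbers: a 110000 MB cache_dir / 1 MB per slice = 110000 slices,
 and with max-size=1048575 a single object can occupy a whole slice
 by itself.)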

 Something like this, with ~30GB of COSS on each disk and double that
 on the big disk, gives ~150GB of small-object storage:

 cache_dir coss /cache1coss 30000 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
 cache_dir aufs /cache1aufs 100000 16 256 min-size=65536

 cache_dir coss /cache2coss 30000 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
 cache_dir aufs /cache2aufs 100000 16 256 min-size=65536

 cache_dir coss /cache3coss 30000 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
 cache_dir aufs /cache3aufs 100000 16 256 min-size=65536

 cache_dir coss /cache4coss1 60000 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
 cache_dir aufs /cache4aufs 240000 16 256 min-size=65536

 This last one is a little tricky. You will need to test and see if it
 is okay this big or needs reducing.
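
 Whichever sizes you settle on, the new COSS stripes and AUFS swap
 directories need to be created before Squid will use them. A typical
 sequence (assuming the standard squid binary; adjust paths to your
 install) is:

   squid -k shutdown
   squid -z        # create any missing cache_dir structures
   squid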

 On the down side, multiple dirs like this will mean ~60MB of RAM
 consumed by active COSS membufs instead of 15MB. You could bump
 membufs up to 20, but changes like those suggested by Marcus and
 myself above are needed to make that worthwhile.
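
 (In numbers, assuming each membuf is a 1MB buffer: 4 COSS dirs x 15
 membufs x 1MB = 60MB; at membufs=20 that becomes 4 x 20 x 1MB =
 80MB.)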

 Amos

>
> Ragheb Rustom wrote:
>> Dear All,
>> I have a squid cache proxy which is delivering content to around
>> 3000+ users. After some problems with AUFS performance under peak
>> hour loads, I converted one of my cache_dirs to COSS, following
>> Amos's settings on the squid-cache website, while leaving all the
>> others as AUFS holding files bigger than 1MB. After running
>> perfectly for some time with COSS, with very beautiful results, I
>> started seeing the below messages in my cache.log file:
>> storeCossCreateMemOnlyBuf: no free membufs.  You may need to
>> increase the value of membufs on the /cache3/coss1 cache_dir
>> Here are my squid.conf settings:
>> cache_dir coss /cache3/coss1 110000 max-size=1048575 max-stripe-waste=32768 block-size=8192 membufs=15
>> cache_dir aufs /cache1 115000 16 256 min-size=1048576
>> cache_dir aufs /cache2 115000 16 256 min-size=1048576
>> cache_dir aufs /cache4/cache1 240000 16 256 min-size=1048576
>> Please note all my HDDs are SAS 15k drives, sized as follows:
>> /cache1                                147GB
>> /cache2                                147GB
>> /cache3                                147GB
>> /cache4                                450GB
>> The system is a dual quad-core Intel Xeon server with 16GB of
>> physical RAM.
>> Do you think I should increase the membufs value, and what do you
>> think the best or optimal value for such a system would be?
>> Sincerely,
>>
>> Ragheb Rustom
>> Smart Telecom S.A.R.L
>>





