Re: Re: SMP vs Single Process Performance

On 18/03/2013 11:06 p.m., babajaga wrote:
Some aspects:
- You are only using aufs. I consider it good for larger objects, but for
small ones (<=32KB) rock should be faster. So I suggest some splitting of
cache_dirs based on object size.
- Be careful when setting up the filesystem for your cache_dirs on disk. In
my experience this has a huge impact on performance. I consider HDDs
reliable, so I accept the risk of losing some cache content in case of a
disk failure (which has happened very seldom for me) and use an ext4
filesystem stripped down to the very basics (no journal, no access
timestamps, etc.); one possible setup is sketched after this list.
- AFAIK, SMP does not do shared aufs. That means that with your config you
take the risk of having the same file cached multiple times, in different
cache_dirs. So you might consider having multiple workers for the rock dirs,
but only one worker for the larger stuff, stored in one single HUGE aufs
cache_dir. However, that will need the configure option
'--enable-async-io=128'
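
For illustration, a minimal sketch of that kind of stripped-down ext4 setup,
assuming a dedicated partition /dev/sdb1 mounted at /cache1 (both names are
placeholders, adapt to your own layout):

   # no journal, no reserved blocks - cache contents are expendable
   mkfs.ext4 -O ^has_journal -m 0 /dev/sdb1
   # skip access-time updates on every cache read
   mount -o noatime,nodiratime /dev/sdb1 /cache1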

Maybe yes, maybe no. Your mileage using it *will* vary a lot.

* Querying just one cache_dir is no faster or slower than querying multiple, since they all use a memory index.

* Remember that the Squid UFS filesystem has a maximum of roughly 2^27 objects per cache_dir; a single huge TB-sized dir cannot hold any more objects than a tiny MB-sized one. You *will* need to set a high minimum object size limit on the cache_dir line to fill a very big cache, with another cache_dir for the smaller objects.

* If you are using RAID to achieve the large disk size, it is not worth it. Squid performs the equivalent of RAID by spreading objects across directories on its own, and the extra RAID operations are just a drag on performance, no matter how small. See the wiki FAQ page on RAID for more details on that.

* And finally, you also may not need such a huge disk cache, and may not be able to use one due to RAM limits on the worker - that memory index uses ~1MB of RAM per GB of _total_ disk space across all cache_dirs on the worker.
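
As a rough sketch only (the paths, sizes and the 32 KB split point are placeholder values to adapt, not a recommendation), a size-split layout along those lines could look like this in squid.conf:

   # small objects go to rock
   cache_dir rock /cache/rock 10000 max-size=32768
   # everything larger goes to one big aufs dir
   cache_dir aufs /cache/aufs 900000 64 256 min-size=32769

With the ~1MB-per-GB figure above, the 900000 MB aufs dir in this sketch would already cost on the order of 900 MB of index RAM on that worker.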


- To smooth out access from clients, you might consider using delay pools,
to limit the risk of some bad guys sucking up your bandwidth, by putting an
upper limit on download speed.

Yes and no. This caps the usage through Squid, but operating system QoS controls work a lot better than Squid delay pools and can account for non-HTTP traffic in their calculation of who is hogging bandwidth. They can also re-adjust the allowances far more dynamically for changing traffic flows.
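
For completeness, if delay pools are what gets used anyway, a minimal sketch of a per-client cap might look like this in squid.conf (the ~64 KB/s figure is just an example value):

   # one class-2 pool: no aggregate limit, roughly 64 KB/s per client IP
   delay_pools 1
   delay_class 1 2
   delay_parameters 1 -1/-1 64000/64000
   delay_access 1 allow all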

/2c
Amos

