Fwd: Performance tuning of SMP + Large rock

(resending to squid-users)

On Thu, Feb 13, 2014 at 12:19 PM, Alex Rousskov
<rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> On 02/12/2014 08:04 PM, Rajiv Desai wrote:
>> Hi,
>>
>> I am using squid cache as a forward caching proxy.
>>
>> CONTEXT:
>>
>> For my use case, since:
>> 1. the average object size is ~80KB (and, more importantly, > 32KB),
>> 2. the proxy server has multiple cores available, and
>> 3. the throughput requirement is high (up to 1Gbps),
>>
>> I have configured squid to use SMP + LargeRock. I am using Squid
>> Cache: Version 3.HEAD-20140127-r13248.
>>
>> I have configured the cache as:
>>
>> cache_dir rock /mnt/squid-cache 256000 max-size=4194304
>
> All Squid imperfections aside, it is rather unlikely that your single
> hard drive is fast enough to sustain disk loads implied by 1Gbps traffic
> (your requirement #3). Most likely, you will need to limit disk traffic
> (and, hence, sacrifice hit ratio) as discussed at the URL below.

I can add more disks, though it would be useful to characterize the
throughput that can be achieved per disk.
When using LargeRock, what does the I/O pattern correspond to? 16KB
random reads if the slot size is 16KB?

I ran fio to check random read throughput with a 16KB block size, and
it matches the peak throughput I observed with iostat when all reads
were being served from the squid cache.

<test>
fio --name=10G.data --rw=randread --fallocate=none --size=10G --bs=16K \
    --scramble_buffers=1 --nrfiles=1 --thread
</test>

<result>
Run status group 0 (all jobs):
   READ: io=10240MB, aggrb=59145KB/s, minb=60564KB/s, maxb=60564KB/s,
mint=177289msec, maxt=177289msec

Disk stats (read/write):
  sdc: ios=654691/0, merge=0/0, ticks=142144/0, in_queue=141576, util=79.99%
</result>
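
As a rough sanity check (my arithmetic, assuming one 16KB slot read
per disk I/O): 654691 reads in ~177s is ~3700 IOPS, i.e. ~59MB/s at
16KB per read, which matches the aggrb above. With ~80KB objects, each
hit spans ~5 slots, so a single disk would top out around 740
objects/s (~59MB/s of payload), which makes the ~38MB/s I see through
squid plausible once proxy overhead is added.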

Also, can I increase the slot size to a higher value at the expense of
lower granularity?
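
If yes, I would try something like the following (just a sketch; I
have not verified that this snapshot accepts a slot-size option on
rock cache_dirs):

cache_dir rock /mnt/squid-cache 256000 max-size=4194304 slot-size=32768

That would double the slot to 32KB and halve the number of reads per
~80KB object, at the cost of more wasted space on small objects.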

>
>
>> cache_mem 1024 MB
>>
>> with 4 workers:
>> workers 4
>>
>> I am running squid on a VM with 8 vCPUs (reserved CPU) and 8 GB RAM
>> (reserved). Judging by vmstat output, it does not seem to be
>> bottlenecked by CPU or memory.
>> I get a throughput of ~38MB/sec when all objects are read from cache
>> (with 64 outstanding parallel HTTPS reads at all times and avg object
>> size of 80 KB).
>
> The first question you have to ask is: What is the bottleneck in your
> environment? It sounds like you assume that it is the disk. Do you see
> disk often utilized above 90%? If yes, you are probably right. See the
> URL below for suggestions on how to measure and control disk utilization.

The utilization is < 50%, but I am unsure whether that is because of async I/O.
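
For reference, the utilization number is the %util column from iostat
extended device stats, sampled along these lines:

iostat -dxk 1 /dev/sdc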

>
>
>> QUESTIONS:
>>
>> 1. I am currently using a xfs mount for my cache_dir.
>> /dev/sdc1 on /mnt/squid-cache type xfs
>> (rw,noatime,nodiratime,nobarrier,logbufs=8)
>>
>> What is the recommended filesystem for storing LargeRock database?
>
> The simpler it is, the higher the probability that you will be able
> to understand what is going on and tune your fs accordingly. Ext2?
>

Will try it out and report results when I have them.
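
Presumably along these lines (a sketch of what I intend to try, on the
same device):

mkfs.ext2 /dev/sdc1
mount -o noatime,nodiratime /dev/sdc1 /mnt/squid-cache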

>
>> Also, are there any recommended options that apply to LargeRock? (I
>> looked through the FAQ and the recommendations there seem to be mainly
>> for ufs/aufs/diskd, which create many small files.)
>>
>> 2. Are there any known throughput limits when using SMP + LargeRock?
>>
>> 3. Are there any recommended tuning options that specifically apply
>> to LargeRock which would help cache read throughput?
>
> Rock store performance tuning suggestions are available at
> http://wiki.squid-cache.org/Features/RockStore
>
> They apply to both Large Rock and Small Rock IIRC.
>
>

Thanks for the pointer.

> HTH,
>
> Alex.
>



