Re: Fwd: Performance tuning of SMP + Large rock

On 02/13/2014 03:01 PM, Rajiv Desai wrote:

> When using Large Rock, what does the I/O pattern correspond to? 16KB
> random reads if the slot size is 16KB?

I am not 100% sure the reads are always 16KB. Squid may request less if
it knows that the object ends before the slot does. You can check
using strace or some such.
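One way to check, sketched below; the `pgrep` pattern used to find a disker process is an assumption and may need adjusting for how your Squid processes are named:

```shell
# Attach strace to a Rock disker and watch the sizes of its disk reads.
# Whether reads appear as read() or pread64() can vary, so trace both.
DISKER_PID=$(pgrep -f 'squid.*disker' | head -n 1)   # hypothetical match pattern
strace -f -p "$DISKER_PID" -e trace=read,pread64 2>&1 |
  head -n 50   # the "= N" at the end of each line is the byte count returned
```

This needs a running Squid and root (or ptrace) privileges on the disker process.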

I am also curious why you seem to be ignoring disk writes. In general,
there are more disk writes than disk reads in a forward proxy
environment. And it is disk writes that create the most problems for Squid.


> Also, can I increase the slot size to a higher value at the expense
> of lower granularity?

You can increase the slot size up to 32KB. IIRC, you cannot increase it
higher without adjusting a hard-coded value in Squid, because the shared
memory pages are hard-coded to be 32KB in size and worker/disker
communication is done using those pages.

Please make sure you actually need to increase it. It is possible that
with an 80KB mean object size, 80% of your objects are smaller than
10KB, so increasing the slot size may hurt rather than help...
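For reference, the slot size is set per cache_dir; a hypothetical example with the 32KB maximum (the path and size values here are placeholders, not recommendations):

```
# Rock cache_dir with 32KB slots -- the largest value that works
# without changing Squid's hard-coded 32KB shared memory page size.
cache_dir rock /var/spool/squid 16384 slot-size=32768
```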


>>> cache_mem 1024 MB
>>>
>>> with 4 workers:
>>> workers 4
>>>
>>> I am running squid on a VM with 8 vCPUs(reserved cpu) and 8 GB RAM
>>> (reserved). It does not seem to be bottlenecked by cpu or memory
>>> looking at vmstat output.
>>> I get a throughput of ~38MB/sec when all objects are read from cache
>>> (with 64 outstanding parallel HTTPS reads at all times and avg object
>>> size of 80 KB).
>>
>> The first question you have to ask is: What is the bottleneck in your
>> environment? It sounds like you assume that it is the disk. Do you see
>> disk often utilized above 90%? If yes, you are probably right. See the
>> URL below for suggestions on how to measure and control disk utilization.
> 
> The utilization is < 50% but unsure if that is because of async io.

If the utilization is always so low, the bottleneck may be elsewhere.

If you do have disk writes in your workload, please make sure you do not
look at average disk utilization over a long period of time (30 seconds
or more?). With writes, you have to avoid utilization peaks because they
block all processes, including Squid. If/when that happens, you can
actually see Squid workers in D state using top or similar. The
RockStore wiki page has more information about this complex stuff.
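To catch those short peaks, sample utilization at one-second granularity instead of relying on long-window averages; a sketch using standard Linux tools (`iostat` comes from the sysstat package, which may not be installed everywhere):

```shell
# Per-device utilization sampled every second: watch %util for brief
# spikes toward 100% that a 30-second average would hide.
if command -v iostat >/dev/null; then
  iostat -x 1 3        # three one-second samples, then exit
fi

# List processes currently in uninterruptible disk sleep (state D),
# e.g. Squid workers blocked behind a write burst.
ps -eo stat,pid,comm | awk '$1 ~ /^D/ {print}'
```

If the second command regularly shows Squid workers, disk writes are blocking them even though average utilization looks low.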

Sorry, I do not know what you mean by "async io" in this context. Rock
diskers use regular blocking disk I/O.


Cheers,

Alex.

>> Rock store performance tuning suggestions are available at
>> http://wiki.squid-cache.org/Features/RockStore




