Re: Fwd: Performance tuning of SMP + Large rock

On Thu, Feb 13, 2014 at 3:02 PM, Alex Rousskov
<rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> On 02/13/2014 03:01 PM, Rajiv Desai wrote:
>
>> When using LargeRock, what does the I/O pattern correspond to? 16 KB
>> random reads if the slot size is 16 KB?
>
> I am not 100% sure the reads are always 16 KB. Squid may request less
> if it knows that the object ends sooner than the slot. You can check
> using strace or some such.
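
I will check with something like the following (with <disker-pid> being
the PID of one of the rock disker processes) to see the size of each
read and write:

  # trace all file-descriptor syscalls of one disker, with timings
  strace -T -e trace=desc -p <disker-pid>
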
>
> I am also curious why you seem to be ignoring disk writes. In general,
> there are more disk writes than disk reads in a forward proxy
> environment. And it is disk writes that create the most problems for Squid.
>
>
>> Also, can I increase the slot size to a higher value at the expense
>> of lower granularity?
>
> You can increase the slot size up to 32 KB. IIRC, you cannot go higher
> without adjusting a hard-coded value in Squid, because the shared
> memory pages are hard-coded to be 32 KB in size and worker/disker
> communication is done using those pages.

I increased the slot size for a fresh cache with:
cache_dir rock /mnt/squid-cache 204800 max-size=4194304 slot-size=32768

How do I confirm that the slot size I have configured is being used?
Any logs or squidclient stats that will confirm that?

>
> Please make sure you actually need to increase it. It is possible that
> with an 80 KB mean object size, 80% of your objects are smaller than
> 10 KB, so increasing the slot size may hurt rather than help...
>

This is a controlled dataset where objects are 50 KB to 100 KB in size
with a mean of 80 KB.
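
(As a sanity check on the slot arithmetic: an 80 KB object spans
ceil(80/16) = 5 slots at a 16 KB slot size but only ceil(80/32) = 3
slots at 32 KB, so larger slots should mean fewer reads per object.)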

>
>>>> cache_mem 1024 MB
>>>>
>>>> with 4 workers:
>>>> workers 4
>>>>
>>>> I am running squid on a VM with 8 vCPUs (reserved CPU) and 8 GB RAM
>>>> (reserved). It does not seem to be bottlenecked by CPU or memory,
>>>> judging by vmstat output.
>>>> I get a throughput of ~38 MB/sec when all objects are read from cache
>>>> (with 64 outstanding parallel HTTPS reads at all times and an avg
>>>> object size of 80 KB).
>>>
>>> The first question you have to ask is: What is the bottleneck in your
>>> environment? It sounds like you assume that it is the disk. Do you see
>>> disk often utilized above 90%? If yes, you are probably right. See the
>>> URL below for suggestions on how to measure and control disk utilization.
>>
>> The utilization is < 50%, but I am unsure if that is because of async I/O.
>
> If the utilization is always so low, the bottleneck may be elsewhere.
>

When I perform a random-read benchmark, iostat still shows an idle
percentage > 50%.
Perhaps you are referring to a different utilization metric that I
should be looking at?
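
To be precise, I am reading the avg-cpu %idle figure from iostat. If
you mean per-device utilization, I assume that is the %util column in
the extended device report, e.g.:

  # per-device extended statistics, refreshed every second
  iostat -x 1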

> If you do have disk writes in your workload, please make sure you do not
> look at average disk utilization over a long period of time (30 seconds
> or more?). With writes, you have to avoid utilization peaks because they
> block all processes, including Squid. If/when that happens, you can
> actually see Squid workers in D state using top or similar. The
> RockStore wiki page has more information about this complex stuff.
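
Good to know. Assuming standard procps tools, I will watch for workers
stuck in uninterruptible disk sleep with something like:

  # list any squid processes currently in D state
  ps -eo pid,stat,comm | grep squid | awk '$2 ~ /D/'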
>

Disk writes occur on cache misses. When there is a high number of
misses, the WAN bandwidth becomes the bottleneck, with only ~200 Mbps
of bandwidth available, so I am not too concerned about that.
The 1 Gbps requirement applies after I have completely primed the
cache, hence I am more interested in read throughput from the cache.
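
(For scale: 200 Mbps is only ~25 MB/sec, while 1 Gbps is ~125 MB/sec,
so the miss path is capped well below what cached reads need to
deliver.)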

> Sorry, I do not know what you mean by "async io" in this context. Rock
> diskers use regular blocking disk I/O.
>
>
> Cheers,
>
> Alex.
>
>>> Rock store performance tuning suggestions are available at
>>> http://wiki.squid-cache.org/Features/RockStore
>



