
Re: What are recommended settings for optimal sharing of cache between SMP workers?

I dug deeper into the original issue I reported, where some objects
were not getting cache HITs on subsequent reads.

It looks like a bug where one object overwrites a previously written
object. For a similar "multiple downloads of the same data via squid"
test, I directed both the store and cache logs to the same file,
store.log, to make correlation easier.

https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/3f6f2db9-9750047
is a GET request that I always get a MISS for.
It seems the cached object is overwritten by a subsequent GET for:
https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/67f1ae81-9773953
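
For reference, this is how I reproduce it (a sketch, assuming the
proxy listens on localhost:3128, the access log is at the path below
in the default native format, and your existing HTTPS interception is
in place -- the store.log already shows GET entries for these https
URLs, so the GETs must be visible to Squid):

<logs>
# Fetch the same object twice through the proxy:
curl -s -o /dev/null -x localhost:3128 \
  "https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/3f6f2db9-9750047"
curl -s -o /dev/null -x localhost:3128 \
  "https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/3f6f2db9-9750047"

# Field 4 of the native access.log is the result code; TCP_MISS/200 on
# every fetch (instead of TCP_HIT or TCP_MEM_HIT) shows the problem:
sudo grep 3f6f2db9-9750047 /var/log/squid/access.log | awk '{print $4}'
</logs>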

Debugging steps:

1. I extracted the store log entry for it with:

<logs>
~$ sudo grep "GET
https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/3f6f2db9-9750047"
/mnt/squid-cache/store.log | grep SWAPOUT
1392764050.480 SWAPOUT 00 002DA320 A1B99D1CCFD79B73C57554BBDFDB2D89
200 1392764051 1392616446 1400540051 application/octet-stream
40384/40384 GET
https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/3f6f2db9-9750047
</logs>
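
For anyone following along, my reading of the SWAPOUT fields above
(the standard store.log format, as far as I know -- corrections
welcome):

<logs>
1392764050.480                     completion time of the swapout
SWAPOUT 00 002DA320                action, cache_dir index, file number
A1B99D1CCFD79B73C57554BBDFDB2D89   MD5 hash key of the object
200                                HTTP status
1392764051 1392616446 1400540051   Date / Last-Modified / Expires
application/octet-stream           Content-Type
40384/40384                        expected/actual size in bytes
GET <url>                          request method and URL
</logs>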

2. Next I looked at all store.log and cache.log activity for filenum
002DA320, where this object was stored according to the entry above.
A different object was mapped to the same filenum and overwrote the
previous object!

<logs>
$ sudo grep 002DA320 /mnt/squid-cache/store.log
2014/02/18 14:54:10.311 kid6| rock/RockSwapDir.cc(628) createStoreIO:
dir 0 created new filen 002DA320 starting at 49002594304
2014/02/18 14:54:10.480 kid6| store_swapout.cc(338)
storeSwapOutFileClosed: storeSwapOutFileClosed: SwapOut complete:
'https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/3f6f2db9-9750047'
to 0, 002DA320
1392764050.480 SWAPOUT 00 002DA320 A1B99D1CCFD79B73C57554BBDFDB2D89
200 1392764051 1392616446 1400540051 application/octet-stream
40384/40384 GET
https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/3f6f2db9-9750047
2014/02/18 14:54:10.480 kid6| store_dir.cc(341) storeDirSwapLog:
storeDirSwapLog: SWAP_LOG_ADD A1B99D1CCFD79B73C57554BBDFDB2D89 0
002DA320
2014/02/18 14:54:48.494 kid7| rock/RockSwapDir.cc(628) createStoreIO:
dir 0 created new filen 002DA320 starting at 49002594304
2014/02/18 14:54:48.613 kid7| store_swapout.cc(338)
storeSwapOutFileClosed: storeSwapOutFileClosed: SwapOut complete:
'https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/67f1ae81-9773953'
to 0, 002DA320
1392764088.613 SWAPOUT 00 002DA320 26ED76867BB265254E7E39EE5C01BA9E
200 1392764089 1392616475 1400540089 application/octet-stream
25040/25040 GET
https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/67f1ae81-9773953
2014/02/18 14:54:48.613 kid7| store_dir.cc(341) storeDirSwapLog:
storeDirSwapLog: SWAP_LOG_ADD 26ED76867BB265254E7E39EE5C01BA9E 0
002DA320
</logs>

Looks like a collision bug? Note that both SWAPOUTs report the same
starting offset (49002594304) but come from different workers (kid6
and kid7) and carry different hash keys, so this looks like two
workers allocating the same rock slot, not a hash-key collision.
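
To check whether this is an isolated case, something like the
following (a sketch, assuming the combined log shown above) should
list every filenum that received more than one SWAPOUT. Legitimate
reuse after a RELEASE would also show up, so any hits still need to
be checked against RELEASE entries:

<logs>
sudo grep SWAPOUT /mnt/squid-cache/store.log \
  | awk '{ n[$4]++ } END { for (f in n) if (n[f] > 1) print f, n[f] }'
</logs>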

On Tue, Feb 18, 2014 at 6:15 AM, Dr.x <ahmed.zaeem@xxxxxxxxxxxx> wrote:
> Amos Jeffries-2 wrote
>> On 19/02/2014 12:12 a.m., Dr.x wrote:
>>> I have my doubts:
>>> without SMP, with the same traffic and the same users, I can save 40Mbps,
>>>
>>> but with SMP, combining AUFS with rock (32KB max object size),
>>> I can only save 20Mbps.
>>>
>>>
>>> I'm wondering: will large rock heal this?
>>>
>>
>> How many Squid processes do you currently need to service those
>> users' traffic?
>>
>> If the number is >1 then the answer is probably yes.
>>
>>  * Each worker should see the same HIT ratio from its AUFS-cached
>> objects. The shared rock storage should then raise the HIT ratio
>> further, for workers which would not otherwise see those small
>> objects.
>>
>>
>>> Or should I return to AUFS and wait until Squid releases a version
>>> that supports a bigger rock object size?
>>>
>>> Bandwidth saving is a big issue for me and must be achieved!
>>>
>>
>> Your choice there.
>>
>> FYI: The upcoming Squid series with large-rock support is not planned to
>> be packaged for another 3-6 months.
>>
>> HTH
>> Amos
>
> Hi Amos,
> I have about 900 req/sec, and I think I need 4 or 5 workers at most;
> I have 24 cores.
> From the old Squid setup that was saving 40-45Mbps, I found the mean
> object size:
>   Mean Object Size:       *142.30 KB*
>
> I note that 142KB is close to 100KB.
>
> I mean, if I used large rock, would it improve the byte hit ratio?
> Do you agree with me?
>
> Now, regarding using AUFS with rock:
>
> I currently have 5 AUFS hard disks, each with its own conf file,
> AUFS dir, and max object size.
>
> Now, what is the best SMP implementation?
>
> Should I use if-statements to map each worker to an AUFS dir? [a
> sketch follows below, after this quoted message]
>
> I'm not sure which approach is best;
> any advice on where to start would be appreciated.
>
>
> Also, can I use large rock now?
> regards
>
>
>
>
> -----
> Dr.x
> --
> View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/What-are-recommended-settings-for-optimal-sharing-of-cache-between-SMP-workers-tp4664909p4664921.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
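
PS regarding the per-worker mapping question quoted above: it is
usually done with ${process_number} conditionals in squid.conf. A
minimal, untested sketch -- the paths, sizes, and five-worker count
are placeholders, not a recommendation:

<logs>
workers 5

# One AUFS disk per worker; AUFS dirs cannot be shared between workers.
if ${process_number} = 1
cache_dir aufs /mnt/cache1 100000 16 256 min-size=32769
endif
if ${process_number} = 2
cache_dir aufs /mnt/cache2 100000 16 256 min-size=32769
endif
# ... repeat for workers 3-5 ...

# One rock dir, shared by all workers, for objects up to 32KB.
cache_dir rock /mnt/cache-rock 50000 max-size=32768
</logs>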



