Re: rock issue

Dear Amos,

**I will use each rock dir with one physical disk; I am setting that up now. I will also change the rock dir optional values back to their defaults.
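
**Roughly like this (the mount points and sizes below are placeholders, I still have to finalize them): one rock dir per mounted disk, with only the size given and the optional parameters (max-size, max-swap-rate, swap-timeout) left at their defaults:

  cache_dir rock /mnt/ssd1/rock 64000
  cache_dir rock /mnt/ssd2/rock 64000
  cache_dir rock /mnt/ssd3/rock 64000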

**Please note that I am switching to rock, since one processor won't handle 800 Mb/s of traffic.

**Theoretically, would Squid with a rock cache_dir give me the same HIT ratio as a UFS cache_dir? [At 100 Mb/s, UFS gave me 70%.]



**How many workers and how much RAM do you think are needed for 800 Mb/s of traffic?

Thank you


From: squid-users <squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx> on behalf of Amos Jeffries <squid3@xxxxxxxxxxxxx>
Sent: Thursday, July 2, 2020 1:20 PM
To: squid-users@xxxxxxxxxxxxxxxxxxxxx <squid-users@xxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [squid-users] rock issue
 
On 2/07/20 8:45 am, patrick mkhael wrote:
>
> ***Please note that you have 20 kids worth mapping (10 workers and 10
> diskers), but you map only the first 10. {Since I did not get the point
> of the diskers; as far as I understood, it should be like this (simple
> example):
>> workers 2
>> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
>> cache_dir rock /mnt/sdb/1   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
>> cache_dir rock /mnt/sdb/2   2048 max-size=10000 max-swap-rate=200 swap-timeout=300}
>
>
>
> ***Why do you have 10 rock caches of various sizes? [To be honest, I
> saw on many websites that it should be set up like this, from the
> smallest to the biggest with different sizes; I thought it should serve
> from the small-size pool up to the large one.]
>
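
(On the cpu_affinity_map point quoted above: with 10 workers and 10 rock
diskers the kid processes are numbered 1 through 20, so a map covering all
of them would look roughly like the sketch below. The core numbers are
purely illustrative and assume a machine with at least 20 cores; adjust
them to your actual CPU layout.)

 workers 10
 cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20 cores=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20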

On the small-to-large question: in general, yes. BUT the size ranges to
use should be determined by traffic analysis. Specifically, measure and
graph the sizes of the objects being handled; the result will be a wavy
/ cyclic line. The size boundaries should be set at the *minimum*
point(s) along that line.
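
For example (the 64 KB and 1 MB boundaries below are invented purely for
illustration; the real numbers have to come from the minimum points on
your own graph), those boundaries then become min-size/max-size limits
on the cache_dirs:

 cache_dir rock /cache/small  16000 max-size=65536
 cache_dir rock /cache/medium 32000 min-size=65537 max-size=1048576
 cache_dir rock /cache/large  64000 min-size=1048577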


That said, since you are comparing the new rock setup to an old UFS one,
it would be best to start with the rock caches set up as similarly to
the UFS ones as you can - same number of cache_dir entries, same ranges
of objects stored in each, etc.

ie. if those ranges were used in the old UFS setup, keep them for now.
They can be re-calculated after the cause of the HIT ratio drop is
identified.


> *****How many independent disk spindles (or equivalent) do you have? [I
> have one RAID 5 array of SSD disks, used by all 10 rock cache_dirs.]
>

Ouch.

Ideally you would have either:

 5x SSD disks mounted separately, with one rock cache on each.

or,

 1x RAID 10 with one rock cache per disk pair/stripe. This requires the
controller to be able to map a sub-directory tree exclusively onto one
of the sub-array stripes.

or,

 2x RAID 1 (drive pair mirroring) with one rock cache on each pair. This
is the simplest way to achieve the above when the sub-array feature is
not available in RAID 10 (see the sketch after this list).

or,

 1x RAID 10 with a single rock cache
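
As a sketch of the 2x RAID 1 option (the device names, mount points and
sizes are only examples):

 # pair A (e.g. /dev/md0) mounted on /cache1, pair B (e.g. /dev/md1) on /cache2
 cache_dir rock /cache1/rock 64000
 cache_dir rock /cache2/rock 64000

Each rock cache then gets its own pair of spindles to write to, instead
of all caches competing for the one RAID 5 array.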



The reasons:

Squid's I/O pattern is mostly writes and erases, with few reads.

RAID types in order of best->worst for that pattern are:
  none, RAID 1, RAID 10, RAID 5, RAID 0
<https://wiki.squid-cache.org/SquidFaq/RAID>

Normal SSD controllers cannot handle the Squid I/O pattern well. Squid
*will* age the disk much faster than the manufacturer's measured
statistics indicate. (True even for HDD, just less of a problem there.)

This means the design needs to plan for coping with relatively frequent
disk failures. Loss of the data itself is irrelevant; only the outage
time and the reduction in HIT ratio actually matter when a failure
happens.

Look for SSDs with high write-cycle ratings, and for RAID hot-swap
capability (even if the machine itself can't hot-swap).



> ***How did you select the swap rate limits and timeouts for
> cache_dirs? [I also took them from an online forum; can I leave both
> unset?]
>

Alex may have better ideas if you can refer us to which tutorials or
documents you found that info in.

Without specific details on why those values were chosen, I would start
with one rock cache using the default values, only changing them if
follow-up analysis indicates some other value is better.
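
For example, just:

 cache_dir rock /cache/rock 64000

with no max-swap-rate or swap-timeout options at all (the path and size
here are placeholders for your own).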


Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users