
Re: Queue congestion at 60 req/sec

Changed to COSS, and it's amazing. Now I have:

avg-cpu:  %user   %nice    %sys %iowait   %idle
          1.01    0.00    0.00    0.00   98.99

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
ida/c0d0     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
   0.00     0.00    0.00   0.00   0.00
ida/c0d1     0.00   0.00  0.99  0.99   15.84    7.92     7.92     3.96
  12.00     0.02   11.00  11.00   2.18

vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
0  0    208  14944   6124 503996    0    0   104   112   70    53  1  1 87 10
0  0    208  14880   6124 503996    0    0     0     0 1533   362  2  1 98  0
0  0    208  14816   6140 503980    0    0    28  1072 1442   359  1  1 93  5
0  0    208  14816   6140 503980    0    0     0     0 1426   339  2  1 98  0
0  0    208  14816   6140 503980    0    0    16     0 1381   316  2  1 98  1
0  0    208  14816   6140 503980    0    0    24     0 1424   345  1  0 98  0
0  0    208  14816   6140 503980    0    0     0     0 1430   352  2  1 98  0

BTW, the disks are attached to a RAID controller, but they are configured as
separate arrays, so there is no duplexing of the IO.
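
For reference, a COSS setup spread across two such arrays might look like
the following in squid.conf (the paths and sizes here are hypothetical, not
taken from Pablo's box). COSS packs objects into one cyclic file per
cache_dir, so max-size should cover the 32K object ceiling mentioned
earlier in the thread:

    # one COSS store per array -- paths/sizes are examples only
    cache_dir coss /cache1/coss 4000 max-size=32768 block-size=512
    cache_dir coss /cache2/coss 4000 max-size=32768 block-size=512

Since each cache_dir lives on its own array, Squid spreads the store
writes rather than mirroring them, which matches the "no duplexing"
point above.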

Best Regards, Pablo



On 6/11/07, squid3@xxxxxxxxxxxxx <squid3@xxxxxxxxxxxxx> wrote:
> The objects being cached are 32K at most,
> this is the busiest line of iostat -x I got
>
> avg-cpu:  %user   %nice    %sys %iowait   %idle
>            2.49    0.00    1.99   22.39   73.13
>
> Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
> avgrq-sz avgqu-sz   await  svctm  %util
> ida/c0d0     0.00  35.64 14.85  3.96  182.18 1148.51    91.09   574.26
>    70.74     3.89   16.32   8.68  16.34
> ida/c0d1     0.00  58.42 36.63 10.89  435.64  570.30   217.82   285.15
>    21.17     0.81   16.56   6.98   33.17
>

Um, with names like 'ida/c0d1', would that be a RAID array? That would kind
of explain the higher IO, as the RAID duplicates each store write.

Amos



