Re: Where is the performance bottleneck?

* Holger Kiehl (Holger.Kiehl@xxxxxx) wrote:

> There is however one difference, here I had set
> /sys/block/sd?/queue/nr_requests to 4096.

Well, from that it looks like none of the queues gets above 255
(hmm, that's a suspiciously round number....)
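For reference, a quick sketch of how that tunable is usually inspected and raised (same sysfs paths as quoted above; writing needs root, and the `sd?` glob will of course match different devices on other systems):

```shell
# Show the current queue depth limit for each SCSI disk, then raise it.
# 4096 is the value Holger mentions; the stock default is 128.
for q in /sys/block/sd?/queue/nr_requests; do
    printf '%s = %s\n' "$q" "$(cat "$q")"
    echo 4096 > "$q"   # needs root
done
```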

> avg-cpu:  %user   %nice    %sys %iowait   %idle
>            0.10    0.00   21.85   58.55   19.50

Fair amount of system time.

> Device:    rrqm/s  wrqm/s    r/s   w/s    rsec/s  wsec/s     rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  svctm   %util
> sdf      11314.90    0.00 365.10  0.00  93440.00    0.00  46720.00   0.00    255.93      1.92   5.26   2.74   99.98
> sdg       7973.20    0.00 257.20  0.00  65843.20    0.00  32921.60   0.00    256.00      1.94   7.53   3.89  100.01

There seems to be quite a spread of read performance across the drives
(pretty consistent across the run); what makes sdg so much slower than
sdf (they seem to be the slowest and fastest drives, respectively)?
I guess if every drive ran at sdf's speed you would be pretty happy.

If you physically swap f and g does the performance follow the drive
or the letter?

Dave
--
 -----Open up your eyes, open up your mind, open up your code -------   
/ Dr. David Alan Gilbert    | Running GNU/Linux on Alpha,68K| Happy  \ 
\ gro.gilbert @ treblig.org | MIPS,x86,ARM,SPARC,PPC & HPPA | In Hex /
 \ _________________________|_____ http://www.treblig.org   |_______/
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
