Re: Device utilization with RAID-1

On Thu, Aug 18, 2011 at 5:42 AM, NeilBrown <neilb@xxxxxxx> wrote:
> On Thu, 18 Aug 2011 02:26:17 +0200 Harald Nikolisin <hochglanz@xxxxxxxxx>
> wrote:
>
>> hi,
>>
>> I didn't want to complain about SW RAID-1 performance in general. I
>> simply think something is wrong with my setup, and I currently have no
>> idea how to improve it.
>>
>> The basic questions (to which I found no answer, neither in FAQs nor in
>> forum discussions) are:
>> a) Is it normal for the hard drives to show permanent utilization
>> (around 20%) without any noticeable activity on the computer?
>
> No.  If the array is resyncing or recovering then you would expect
> utilization for as many hours as it takes - but that would show
> in /proc/mdstat.
>
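
A quick way to check for a resync or recovery in progress, for the
archives (assuming the array is /dev/md0):

cat /proc/mdstat           # a running resync/recovery shows a progress bar here
mdadm --detail /dev/md0    # prints a rebuild/resync progress line while one runs
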
>> b) As long as no resync is happening, should the state reported by mdadm
>> be "active" or "clean"?
>
> If anything has been written to the device in the last 200msec (including
> e.g. access time updates) then expect it to be 'active'.
> If nothing has been written for 200msec or more, then expect it to be clean.
>
> If you crash while it is active, a resync is needed.
> If you crash while it is clean, no resync is needed.
> If you don't crash at all .... that is best :-)

I think this info should go on the wiki if it isn't there already.
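
For reference, that ~200msec window is md's "safe mode delay", and it is
tunable through sysfs (value in seconds; md0 is just an example name):

# how long md waits after the last write before marking the array clean
cat /sys/block/md0/md/safe_mode_delay
# e.g. stretch it to half a second
echo 0.5 > /sys/block/md0/md/safe_mode_delay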

btw, I've experimented a bit on my /boot array (it isn't being updated;
checked with iostat), and:
root@m2:~# mdname=md0
root@m2:~# for i in {1..5}; do echo "iteration $i"; \
    mdadm --detail /dev/$mdname | grep 'State '; \
    cat /sys/block/$mdname/md/array_state; \
    grep "$mdname :" /proc/mdstat; sleep 1; done
iteration 1
          State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 2
          State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 3
          State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 4
          State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 5
          State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]

so, mdadm --detail and array_state show the array as "clean", while
/proc/mdstat shows it as "active" (with no reads or writes happening).

Either one of those values is lying, or I'm misunderstanding it...
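
(If I read things right, "active" in /proc/mdstat just means the array is
started, as opposed to "inactive"/stopped; the clean-vs-active distinction
Neil describes only shows up in array_state and in mdadm --detail.)

A small sketch to watch the array_state transition happen; this assumes
md0 carries a filesystem mounted read-write at /boot:

cat /sys/block/md0/md/array_state    # expect: clean
touch /boot/.mdtest && sync          # force a write to the array
cat /sys/block/md0/md/array_state    # expect: active, briefly
sleep 1
cat /sys/block/md0/md/array_state    # back to clean after safe_mode_delay
rm -f /boot/.mdtest                  # remove the test file again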

>
>
>
>>
>> cheers,
>>   harald
>>
>> Well, I have only 2 hard drives and no space for more.
>>
>> Am 16.08.2011 03:29, schrieb Roberto Spadim:
>> > try raid10 far layout
>> >
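
(For context: a two-disk far-layout raid10 can be created with something
like the following; the array and device names are only examples, and
--create destroys existing data on the member devices:

mdadm --create /dev/md9 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda1 /dev/sdb1

The far layout still keeps two full copies, but arranges them so that
reads can stripe across both disks raid0-style, which is why it usually
reads faster than plain raid1.)
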
>> > 2011/8/15 Harald Nikolisin <hochglanz@xxxxxxxxx
>> > <mailto:hochglanz@xxxxxxxxx>>
>> >
>> >     For a long time I've been unhappy with the performance of my RAID-1
>> >     system. Investigation with atop and iostat reveals that the disk
>> >     utilization always sits at a certain level even though nothing is
>> >     happening on the system. When reading or writing files, the
>> >     utilization always spikes to 100% for a long time. Very ugly examples
>> >     are starting Firefox or running "zypper update".
>> >     Here is a snapshot of the iostat output:
>> >
>> >
>> >     Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>> >     sda               0,00     0,00    0,00    7,33     0,00    43,33     5,91     0,33   43,18  33,32  24,43
>> >     sdb               0,00     0,00    0,00    7,33     0,00    43,33     5,91     0,35   45,59  39,73  29,13
>> >     md0               0,00     0,00    0,00    0,67     0,00     5,33     8,00     0,00    0,00   0,00   0,00
>> >     md1               0,00     0,00    0,00    0,33     0,00     5,33    16,00     0,00    0,00   0,00   0,00
>> >     md2               0,00     0,00    0,00    0,33     0,00     1,00     3,00     0,00    0,00   0,00   0,00
>> >     md3               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
>> >     md4               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
>> >     md5               0,00     0,00    0,00    0,33     0,00     0,67     2,00     0,00    0,00   0,00   0,00
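
(For anyone reproducing this: extended per-device stats like the above come
from sysstat's iostat, invoked with something like the following; the
device names are examples:

iostat -x sda sdb 1    # extended stats for both raid members, every second
)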
>> >
>> >     I checked with mdadm whether a resync is happening, but that is not
>> >     the case. The state says "active" on all RAID devices. Btw, what is
>> >     the difference from "clean"?
>> >
>> >     thanks for any hints,
>> >      harald
>> >
>> > --
>> > Roberto Spadim
>> > Spadim Technology / SPAEmpresarial
>>
>



-- 
Best regards,
[COOLCOLD-RIPN]