Re: q: RAID1 with very unbalanced disk performance

On Tue, Apr 6, 2010 at 8:28 AM, Olaf Zevenboom <olaf@xxxxxxxxxxx> wrote:
> Dear List,
>
> We have a setup with two SATA2 1.5 TB hard disks in an MD/LVM2 RAID1 setup on
> Debian Lenny with the stock kernel.
> As it is suffering from performance troubles I took a closer look and
> noticed that /dev/sda is used far more intensively than /dev/sdb. I
> monitored the drives with various tools including atop.
> /dev/sda is always a bit busier than /dev/sdb. This behavior can also be
> seen on other systems with a similar setup (Etch and Lenny), but on this
> particular system /dev/sda is about 10% more intensively used. Although I
> think that is quite a lot and not as it should be, I can live with that. What
> worries me more is that /dev/sda can peak at 100% disk utilization, causing
> the system to be temporarily unresponsive, while /dev/sdb does not seem to
> peak over 20% or so.
> Any pointers on what is happening here and/or how I can resolve this issue
> are quite welcome.
>
> Thanking you in advance,
> Olaf Zevenboom
>
> Details:
>
> Running Debian Lenny with stock kernel 2.6.26-2-amd64 #1 SMP
> LVM2 on top of MD
>
> 2 SATA2 1.5tb disks: lsscsi
> [0:0:0:0]    disk    ATA      WDC WD15EARS-00Z 80.0  /dev/sda
> [1:0:0:0]    disk    ATA      ST31500541AS     CC32  /dev/sdb
>
> cat /proc/mdstat
> Personalities : [raid1]
> md1 : active raid1 sda2[0] sdb2[1]
>       1464886912 blocks [2/2] [UU]
>
> md0 : active raid1 sda1[0] sdb1[1]
>       248896 blocks [2/2] [UU]
>
> unused devices: <none>
>
> hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads:   2890 MB in  2.00 seconds = 1445.36 MB/sec
> Timing buffered disk reads:  300 MB in  3.01 seconds =  99.67 MB/sec
>
> hdparm -tT /dev/sdb
>
> /dev/sdb:
> Timing cached reads:   7482 MB in  2.00 seconds = 3743.83 MB/sec
> Timing buffered disk reads:  302 MB in  3.02 seconds = 100.00 MB/sec
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

I suspect it's not 'sda' as much as it is the WD15EARS itself that's
making sda the bottleneck.
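One thing worth checking: the WD15EARS is one of WD's "Advanced Format"
drives with 4 KiB physical sectors behind 512-byte emulation, and
Lenny-era partitioning tools typically start partitions at sector 63,
which is not 4 KiB aligned; a misaligned partition turns every write
into a read-modify-write cycle on the drive. A quick sketch to check
alignment (assumes the sysfs layout of a 2.6.26-era kernel; a start
sector divisible by 8 means the partition begins on a 4 KiB boundary):

```shell
# Check whether each partition on sda starts on a 4 KiB boundary.
# A 4 KiB-aligned start sector is divisible by 8 (8 x 512 B = 4096 B).
# /sys/block/sda/sdaN/start holds the start sector of each partition.
for part in /sys/block/sda/sda*; do
    start=$(cat "$part/start")
    if [ $((start % 8)) -eq 0 ]; then
        echo "$(basename "$part"): start sector $start is 4K-aligned"
    else
        echo "$(basename "$part"): start sector $start is MISALIGNED"
    fi
done
```

If the partitions start at sector 63 (the old DOS default), realigning
them to a multiple of 8 (e.g. 2048) reportedly removes most of the
write penalty on these drives.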

I purchased 6 WD10EARS drives and they simply didn't work for RAID,
but even in single-drive situations I see things going on with them
that I don't understand. One clue is that smartctl tells me the
Load_Cycle_Count is increasing at a rate of about 30/hour, and I
haven't found out why. At that rate the drive is out of spec in 18
months, not 3 years, and I've seen this on two different machines. I
also saw problems in dmesg and /var/log/messages about reads and
writes being blocked, and once in a while a kernel warning with a
traceback about a drive timing out.
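To measure the parking rate yourself, sample SMART attribute 193
(Load_Cycle_Count) twice and take the difference. A minimal sketch,
assuming smartctl from smartmontools is installed and the drive is sda:

```shell
# Sample Load_Cycle_Count (SMART attribute 193) an hour apart to
# measure the head-parking rate. The raw value is the 10th field of
# smartctl's attribute table.
before=$(smartctl -A /dev/sda | awk '/Load_Cycle_Count/ {print $10}')
sleep 3600
after=$(smartctl -A /dev/sda | awk '/Load_Cycle_Count/ {print $10}')
echo "load cycles in the last hour: $((after - before))"
```

If the parking rate is the culprit, WD's DOS-based WDIDLE3 utility has
been reported to raise or disable the roughly 8-second idle timer on
these Green drives, though running it is at your own risk.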

On the other hand I replaced the 1TB WD10EARS with a Raid Edition
500GB WD drive - WD5002ABYS - and I see none of these problems. My
RAID1 setup is working great. I have no error or warning messages
anywhere that I can find. Granted, half the storage for $10 more per
drive, but it WORKS!
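It may also be worth quantifying the imbalance over time with iostat
from the sysstat package rather than eyeballing atop; %util is the
last column of iostat's extended device statistics. A sketch, assuming
the two RAID1 members are sda and sdb:

```shell
# Take three 5-second extended samples and print %util for each
# RAID1 member; %util is the last field of an iostat -x device line.
iostat -dx sda sdb 5 3 | awk '/^sd[ab]/ {printf "%s %%util=%s\n", $1, $NF}'
```

A sustained gap like 100% on sda against 20% on sdb across samples
would confirm that the drive, not the workload, is the problem.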

Just my observations so far.

- Mark
