Re: Performance question, RAID5

On 30 January 2011 00:15, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> Roman Mamedov put forth on 1/29/2011 5:57 PM:
>> On Sat, 29 Jan 2011 23:44:01 +0000
>> Mathias Burén <mathias.buren@xxxxxxxxx> wrote:
>>
>>> Controller device @ pci0000:00/0000:00:16.0/0000:05:00.0 [sata_mv]
>>>   SCSI storage controller: HighPoint Technologies, Inc. RocketRAID
>>> 230x 4 Port SATA-II Controller (rev 02)
>>>     host6: [Empty]
>>>     host7: /dev/sde ATA SAMSUNG HD204UI {SN: S2HGJ1RZ800964 }
>>>     host8: /dev/sdf ATA WDC WD20EARS-00M {SN: WD-WCAZA1000331}
>>>     host9: /dev/sdg ATA SAMSUNG HD204UI {SN: S2HGJ1RZ800850 }
>>
>> Does this controller support PCI-E 2.0? I doubt it.
>> Does your Atom mainboard support PCI-E 2.0? I highly doubt it.
>> And if PCI-E 1.0/1.1 is used, these last 3 drives are limited to 250 MB/sec
>> in total, which in reality will be closer to 200 MB/sec.
>>
>>> It's all SATA 3Gbs. OK, so from what you're saying I should see
>>> significantly better results on a better CPU? The HDDs should be able
>>> to push 80MB/s (read or write), and that should yield at least 5*80 =
>>> 400MB/s (6 drives - 1 for parity) on easy (sequential?) reads.
>>
>> According to the hdparm benchmark, your CPU cannot read faster than 640
>> MB/sec from _RAM_, and that's just plain easy linear data from a buffer. So it
>> is doubtful that you will get 400 MB/sec reading from RAID6 (with all the
>> corresponding overheads).
>
> It's also not promising given that 4 of his 6 drives are WDC-WD20EARS, which
> suck harder than a Dirt Devil at 240 volts, and the fact his 6 drives don't
> match.  Sure, you say "Non matching drives are what software RAID is for right?"
> Wrong, if you want best performance.
>
> About the only things that might give you a decent boost at this point are some
> EXT4 mount options in /etc/fstab:  data=writeback,barrier=0
>
> The first eliminates strict write ordering.  The second disables write barriers,
> so the drive's caches don't get flushed by Linux, and instead work as the
> firmware intends.  The first of these is safe.  The second may cause some
> additional data loss if writes are in flight when the power goes out or the
> kernel crashes.  I'd recommend adding both to fstab, reboot and run your tests.
> Post the results here.
>
> If you have a decent UPS and auto shutdown software to down the system when the
> battery gets low during an outage, keep these settings if they yield
> substantially better performance.
>
> --
> Stan
>

Right. I wasn't using the writeback option. I won't disable barriers
as I've no UPS. I've seen the stripe= ext4 mount option, from
http://www.mjmwired.net/kernel/Documentation/filesystems/ext4.txt :

stripe=n        Number of filesystem blocks that mballoc will try
                to use for allocation size and alignment. For RAID5/6
                systems this should be the number of data
                disks * RAID chunk size in file system blocks.

I suppose in my case the number of data disks is 5, the RAID chunk size is
64KB, and the file system block size is 4KB. This is on top of LVM; I don't
know how that affects the situation. So the mount option would be
stripe=80? (5 * 64KB / 4KB = 80)
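
If that's right, I'd end up with something like this (just a rough sketch for
my setup; /dev/md0, /dev/mapper/vg0-data and /mnt/storage are placeholder
names):

  # confirm the chunk size mdadm reports and the ext4 block size
  mdadm --detail /dev/md0 | grep 'Chunk Size'          # e.g. Chunk Size : 64K
  tune2fs -l /dev/mapper/vg0-data | grep 'Block size'  # e.g. Block size: 4096

  # /etc/fstab entry with data=writeback (barriers left on) and the computed stripe
  /dev/mapper/vg0-data  /mnt/storage  ext4  defaults,data=writeback,stripe=80  0  2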

// Mathias

