Re: RAID 5 write performance advice

On Wednesday August 24, mirko.benz@xxxxxx wrote:
> Hello,
> 
> The RAID5 configuration is: 8 SATA disks, an 8-port Marvell SATA PCI-X 
> controller chip (SuperMicro board), dual Xeon, 1 GB RAM, stripe size 
> 64K, no spare disk.
> 
> Measurements are performed on the raw md device with:
> disktest -PT -T30 -h1 -K8 -B65536 -ID /dev/md0
> using the default stripe size (64K). 128K stripe size does not make a 
> real difference.

May I suggest you try creating a filesystem on the device and doing
tests in the filesystem?  I have found the raw device slower than
filesystem access before, and a quick test shows writing to the
filesystem (ext3) is about 4 times as fast as writing to /dev/md1 on a
6-drive raid5 array.
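A minimal sketch of that comparison, assuming /dev/md0 is the array and
/mnt/test is a free mount point (both are placeholders for your setup;
this destroys any data on the device):

```shell
# Raw-device write, 64K blocks, 1 GB total (16384 * 64K = 1 GB) --
# roughly what the disktest invocation above exercises:
dd if=/dev/zero of=/dev/md0 bs=64k count=16384

# Same write through an ext3 filesystem on the same array:
mkfs -t ext3 /dev/md0
mount /dev/md0 /mnt/test
dd if=/dev/zero of=/mnt/test/bigfile bs=64k count=16384
umount /mnt/test
```

Comparing the two dd throughput figures shows how much the filesystem's
write batching helps the array.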

> 
> If Linux uses "stripe write" why is it so much slower than HW Raid? Is 
> it disabled by default?

No, it is never disabled.  However, it can only work if raid5 gets a
full stripe of data before being asked to flush that data.  Writing to
/dev/md0 directly may cause flushes too often.
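As a rough illustration of the arithmetic, assuming the 8-disk array
with 64K chunks from the original post: one chunk per stripe holds
parity, so a full stripe carries 7 * 64K = 448K of data, and writes
issued in multiples of that size give raid5 the best chance of
computing parity from the new data alone instead of doing
read-modify-write.  The dd invocation below is only a sketch of a
stripe-aligned write, not a benchmark:

```shell
# 8 disks, 64K chunk: 7 data chunks per stripe -> 448K full stripe.
echo $((7 * 64))K   # full-stripe data size

# Write in full-stripe multiples so each write fills whole stripes
# (destroys data on /dev/md0; device name is a placeholder):
dd if=/dev/zero of=/dev/md0 bs=448k count=1024
```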


Does your application actually require writing to the raw device, or
will you be using a filesystem?  

NeilBrown
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
