Re: write performance of HW RAID VS MD RAID

On Wed, 10 Jun 2015 15:27:07 -0700
Ming Lin <mlin@xxxxxxxxxx> wrote:

> Hi NeilBrown,
> 
> As you may already see, I ran a lot of tests with 10 HDDs for the patchset
> "simplify block layer based on immutable biovecs"
> 
> Here is the summary.
> http://minggr.net/pub/20150608/fio_results/summary.log
> 
> MD RAID6 read performance is OK.
> But write performance is much lower than HW RAID6.
> 
> Is it a known issue?

It is not unexpected.
There are two likely reasons.
One is that HW RAID cards often have on-board NVRAM which is used as a
write-behind cache.  This improves throughput by hiding write latency and by
gathering full-stripe writes more often (a full-stripe write needs no
read-modify-write cycle to update the parity).  HW RAID cards may also have
accelerators for the parity calculations, but that is not likely to make a
big difference.
What sort of RAID6 controller do you have?
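
For what it's worth, the parity such accelerators offload is the RAID6 P/Q
syndrome pair.  A minimal userspace sketch of the per-byte calculation,
assuming the same GF(2^8) generator polynomial (0x11d) that the kernel's
lib/raid6 uses (the function names here are illustrative, not kernel API):

#include <stdint.h>
#include <stdio.h>

/* Multiply by the generator g = 0x02 in GF(2^8) with reducing
 * polynomial 0x11d, the field used by lib/raid6. */
static uint8_t gf_mul2(uint8_t v)
{
        return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1d : 0x00));
}

/* P is the plain XOR of the data bytes; Q = sum over i of g^i * D_i,
 * evaluated with Horner's rule from the highest-numbered disk down. */
static void raid6_syndrome(const uint8_t *d, int n, uint8_t *p, uint8_t *q)
{
        uint8_t pp = 0, qq = 0;
        int i;

        for (i = n - 1; i >= 0; i--) {
                pp ^= d[i];
                qq = (uint8_t)(gf_mul2(qq) ^ d[i]);
        }
        *p = pp;
        *q = qq;
}

int main(void)
{
        /* one byte from each of 8 data disks, as in a 10-disk RAID6 */
        uint8_t d[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
        uint8_t p, q;

        raid6_syndrome(d, 8, &p, &q);
        printf("P = 0x%02x, Q = 0x%02x\n", p, q);
        return 0;
}

This is cheap per byte, and lib/raid6 already has SSE/AVX implementations of
it, which is why a hardware parity engine rarely buys much.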

The other is that it is not easy for MD/RAID6 to schedule stripe writes
optimally.  It doesn't really know whether more writes are coming, in which
case it should wait to gather a full stripe, or whether it already has
everything, in which case it should get to work straight away.
It is possible that it could reply to writes as soon as they are in the
(volatile) cache and only force things to storage when a REQ_FUA or REQ_FLUSH
arrives.  That might help ... or it might corrupt filesystems :-(
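
To make that idea concrete, here is a hypothetical sketch (not md code; the
helpers are made up, and the flag names are the 2015-era REQ_FLUSH/REQ_FUA
tested via bio->bi_rw):

#include <linux/bio.h>

/* Hypothetical helpers standing in for a volatile stripe cache. */
static void drain_volatile_cache(void);          /* make all acked writes durable */
static void cache_and_complete(struct bio *bio); /* buffer the write, ack it now */
static void write_through(struct bio *bio);      /* write durably, then ack */

/* Write-behind policy: acknowledge writes from volatile cache and only
 * pay for stable storage when the upper layers ask for it. */
static void handle_write(struct bio *bio)
{
        if (bio->bi_rw & REQ_FLUSH)
                drain_volatile_cache();  /* everything acked so far must hit disk */

        if (bio->bi_rw & REQ_FUA)
                write_through(bio);      /* this write itself must be durable */
        else
                cache_and_complete(bio); /* reply now; data is only in RAM */
}

The safety question is exactly the window between cache_and_complete() and
the next flush: this is only correct if every filesystem really does issue
REQ_FLUSH/REQ_FUA wherever it depends on durability, hence the worry above.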

As long as the patches don't make things obviously worse, I'm happy.

Thanks,
NeilBrown



