On Wed, Jun 10, 2015 at 10:39 PM, Markus Stockhausen <stockhausen@xxxxxxxxxxx> wrote:
>> From: linux-raid-owner@xxxxxxxxxxxxxxx [linux-raid-owner@xxxxxxxxxxxxxxx] on behalf of Roman Mamedov [rm@xxxxxxxxxxx]
>> Sent: Thursday, 11 June 2015 02:27
>> To: Ming Lin
>> Cc: linux-raid@xxxxxxxxxxxxxxx; Neil Brown
>> Subject: Re: write performance of HW RAID VS MD RAID
>>
>> On Wed, 10 Jun 2015 15:27:07 -0700
>> Ming Lin <mlin@xxxxxxxxxx> wrote:
>>
>> > Hi NeilBrown,
>> >
>> > As you may have already seen, I ran a lot of tests with 10 HDDs for the patchset
>> > "simplify block layer based on immutable biovecs".
>> >
>> > Here is the summary:
>> > http://minggr.net/pub/20150608/fio_results/summary.log
>> >
>> > MD RAID6 read performance is OK,
>> > but write performance is much lower than HW RAID6.
>> >
>> > Is it a known issue?
>>
>> Did you tune the stripe_cache_size for the array? Try 32768.
>> https://peterkieser.com/2009/11/29/raid-mdraid-stripe_cache_size-vs-write-transfer/
>
> +1 for giving an increased cache size a try.

Will try it.

> From the numbers I anticipate that you are doing sequential
> read/write tests. Otherwise I would expect a write penalty for
> the HW RAID setup too.

Yes, this is the fio job file:

[global]
ioengine=libaio
iodepth=64
direct=1
runtime=1800
time_based
group_reporting
numjobs=48
gtod_reduce=0
norandommap
write_iops_log=fs

[job1]
bs=640K
directory=/mnt
size=5G
rw=write

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
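[Editor's note: as a sketch of the stripe_cache_size tuning suggested above, assuming the array is md0 (adjust for your device); the value is in pages per member device, so the memory cost scales with the number of disks:]

```shell
#!/bin/sh
# Raise the md raid5/raid6 stripe cache. The sysfs value is the number
# of cache entries per member device, one page each. Assumes the array
# is md0 and the shell has root; guarded so it is a no-op elsewhere.
ATTR=/sys/block/md0/md/stripe_cache_size
if [ -w "$ATTR" ]; then
    echo 32768 > "$ATTR"
fi

# Approximate memory the cache can consume:
#   stripe_cache_size * PAGE_SIZE * nr_disks
# For 32768 entries, 4 KiB pages and the 10-disk array discussed here:
echo "$((32768 * 4096 * 10 / 1024 / 1024)) MiB"   # prints "1280 MiB"
```

Note that the larger cache only helps raid5/raid6 writes; it trades RAM for fewer read-modify-write cycles, so the right value depends on how much memory the host can spare.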