Re: [PATCH] raid10: improve random reads performance


 



On Tue, Jul 19, 2016 at 03:20:06PM -0700, Shaohua Li wrote:
> On Fri, Jun 24, 2016 at 02:20:16PM +0200, Tomasz Majchrzak wrote:
> > RAID10 random read performance is lower than expected due to excessive spinlock
> > contention; the lock is required mostly for rebuild/resync. Simplify allow_barrier,
> > as it is in the IO path and suffers a lot of unnecessary contention.
> > 
> > Since lower_barrier takes the lock only to decrement a counter, convert the
> > counter (nr_pending) into an atomic variable and remove the spinlock. There is
> > also contention on wake_up (it takes a lock internally), so call it only when
> > it is really needed. As wake_up is no longer called on every decrement, ensure
> > the process waiting to raise a barrier is notified when there are no more
> > pending IOs.
> > 
> > Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@xxxxxxxxx>
> 
> Patch looks good, applied. Do you have data how this improves the performance?
> 
> Thanks,
> Shaohua

I have tested it on a platform with 4 NVMe drives using fio random reads.
Before the patch the RAID10 array achieved 234% of single-drive performance;
with the patch the same array achieves 347%. The best possible result for
4 drives in this test is 400% of a single drive, so it is around a 30% boost.

Tomek
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


