Re: [PATCH] blk-mq: plug request for shared sbitmap

On Tue, May 18, 2021 at 10:44:43AM +0100, John Garry wrote:
> On 14/05/2021 03:20, Ming Lei wrote:
> > In case of shared sbitmap, requests are no longer held in the plug list
> > since commit 32bc15afed04 ("blk-mq: Facilitate a shared sbitmap per
> > tagset"), which makes request merging from the flush plug list and
> > batched submission impossible, causing a performance regression.
> > 
> > Yanhui reports a performance regression when running a sequential IO
> > test (libaio, 16 jobs, queue depth 8 per job) in a VM, where the VM disk
> > is emulated with an image stored on xfs/megaraid_sas.
> > 
> > Fix the issue by restoring the original behavior of holding requests
> > in the plug list.
> 
> Hi Ming,
> 
> While testing v5.13-rc2, I noticed that this patch makes the hang I was
> seeing disappear:
> https://lore.kernel.org/linux-scsi/3d72d64d-314f-9d34-e039-7e508b2abe1b@xxxxxxxxxx/
> 
> I don't think that problem is solved, though.

This kind of hang or lockup is usually related to CPU utilization, and
this patch may reduce CPU utilization in the submission context, since
requests are now merged and batch-submitted from the plug list instead of
being issued one at a time.

> 
> So I wonder about throughput performance (I had hoped to test before the
> patch was merged). I only have 6x SAS SSDs at hand, but I see some
> significant changes (good and bad) for mq-deadline on hisi_sas:
> Before: 620K IOPS (read), 300K IOPS (randread)
> After: 430K IOPS (read), 460-490K IOPS (randread)

The 'before 620K' figure could be explained by the queue staying busy when
batching submission isn't applied, which increases the chance of request
merging. This patch enables batching submission, so the queue no longer
stays busy enough for those merges.

BTW, what are the sdev queue depth and the shost can_queue for your
HiSilicon SAS? (They should be readable from sysfs, e.g.
/sys/block/sdX/device/queue_depth and /sys/class/scsi_host/hostN/can_queue.)
 
> 
> The 'none' IO scheduler consistently gives about 450K IOPS (read) and
> 500K IOPS (randread).
> 
> Do you guys have any figures? Are my results as expected?

In Yanhui's virt workload (qemu, libaio, dio, high queue depth, single
job), the patch improves throughput significantly (>50%) when running
sequential writes (dio, libaio, 16 jobs) to XFS. And IO merging is
observed to recover to the level seen when host tags are disabled.

Thanks,
Ming



