On Wed, Apr 07, 2021 at 09:04:30AM +0100, John Garry wrote:
> Reviewed-by: John Garry <john.garry@xxxxxxxxxx>
>
>
> > On Tue, Apr 06, 2021 at 11:25:08PM +0100, John Garry wrote:
> > > On 06/04/2021 04:19, Ming Lei wrote:
> > >
> > > Hi Ming,
> > >
> > > > Yanhui found that write performance is degraded a lot after applying
> > > > hctx shared tagset on one test machine with megaraid_sas, and it
> > > > turns out to be caused by the none scheduler, which becomes the
> > > > default elevator because of the hctx shared tagset patchset.
> > > >
> > > > Given that more SCSI HBAs will apply hctx shared tagset, a similar
> > > > performance issue exists for them too.
> > > >
> > > > So keep the previous behavior by still using mq-deadline as the
> > > > default for queues which apply hctx shared tagset, just like before.
> > > I think that there are some SCSI HBAs which have nr_hw_queues > 1 and
> > > don't use the shared sbitmap - do you think that they would want this
> > > as well (without knowing it)?
> > I don't know, but none has been used for them since the beginning, so
> > that is not a regression of the shared tagset, but this one really is.
>
> It seems fine to revert to the previous behavior when host_tagset is set.
> I didn't check the results for this recently, but for the original shared
> tagset patchset [0] I had:
>
> none sched: 2132K IOPS
> mq-deadline sched: 2145K IOPS

BTW, Yanhui reported that sequential write on virtio-scsi drops by 40~70%
in a VM, and the virtio-scsi device is backed by a file image on XFS over
megaraid_sas. The disk is actually an SSD, not an HDD, so it could be even
worse in the megaraid_sas HDD case. The same drop is observed on virtio-blk
too.

I haven't figured out a simple reproducer on the host side yet, but the
performance data is pretty stable in the VM IO workload.

Thanks,
Ming