Re: [Question] about shared tags for SCSI drivers

On 16/01/2020 09:03, Ming Lei wrote:

(fixed title)

On Thu, Jan 16, 2020 at 12:06:02PM +0800, Yufen Yu wrote:
Hi, all

Shared tags are introduced to maintain a notion of fairness between
active users. This may be good for NVMe with multiple namespaces, to
avoid starving some users. Right?

Actually an NVMe namespace is the equivalent of a LUN in the SCSI world.

Shared tags aren't for maintaining fairness; they are just the natural
software implementation of a SCSI host's tags, since every SCSI host
shares tags among all its LUNs. If the SCSI host supports real MQ, the
tags are hw-queue wide, otherwise they are host wide.


However, I don't understand why we introduced shared tags for SCSI.
IMO, there are two concerns with SCSI shared tags:

1) For now, 'shost->can_queue' is used as queue depth in block layer.

Note that in scsi_alloc_sdev(), the sdev default queue depth is set to shost->cmd_per_lun. This is slightly different for ATA devices - see ata_scsi_dev_config().

There is then also the scsi_host_template.change_queue_depth() callback, which SCSI hosts can use to set this.

And all target drivers share tags on one host. Then, the max tags each
target can get is:

	depth = max((bt->sb.depth + users - 1) / users, 4U);

The host HW is limited in the number of simultaneous commands it can issue, regardless of where these tags are managed.

And you seem to be assuming in this equation that all users will own a subset of host tags, which is not true.


But each target driver may have its own tag capacity and queue depth.
Do shared tags limit target device bandwidth?

No. If the 'target driver' means LUN: each LUN doesn't have its own
independent tags. It may have its own queue depth, but that is often for
maintaining fairness among all active LUNs, not a real queue depth.

You may look at the patches[1], which try to bypass the per-LUN queue depth for SSDs.

[1] https://lore.kernel.org/linux-block/20191118103117.978-1-ming.lei@xxxxxxxxxx/


2) When adding a new target or removing a device, it may be necessary to
freeze other devices to update BLK_MQ_F_TAG_SHARED in hctx->flags. That
may hurt performance.

Adding/removing a device isn't a frequent event, so it shouldn't be a
real issue. Or have you seen an effect in a real use case?


Recently we discussed hostwide shared tags for SCSI[0] and sharing tags
across hardware queues[1]. These discussions are about shared tags. But I am
not sure whether sharing tags across hardware queues can solve my concerns as mentioned.

Both [1] and [0] are for converting some single-queue SCSI hosts into MQ,
because these HBAs support multiple reply queues for completing requests
while having only a single tag set (so they are SQ drivers now). So far
there isn't much hardware of this kind (HPSA, hisi_sas, megaraid_sas, ...).


Thanks,
Ming


Thanks Ming





