Re: [PATCH] block: Improve shared tag set performance

On 10/20/23 10:09, Keith Busch wrote:
> On Fri, Oct 20, 2023 at 09:45:53AM -0700, Bart Van Assche wrote:
>
>> On 10/20/23 09:25, Keith Busch wrote:
>>> The legacy block request layer didn't have a tag resource shared among
>>> multiple request queues. Each queue had their own mempool for allocating
>>> requests. The mempool, I think, would always guarantee everyone could
>>> get at least one request.
>>
>> I think that the above is irrelevant in this context. As an example, SCSI
>> devices have always shared a pool of tags across multiple logical
>> units. This behavior has not been changed by the conversion of the
>> SCSI core from the legacy block layer to blk-mq.
>>
>> For other (hardware) block devices it didn't matter either that there
>> was no upper limit to the number of requests the legacy block layer
>> could allocate. All hardware block devices I know support fixed size
>> queues for queuing requests to the block device.
>
> I am not sure I understand your point. Those lower layers always were
> able to get at least one request per request_queue. They can do whatever
> they want with it after that. This change removes that guarantee for
> blk-mq in some cases, right? I just don't think you can readily conclude
> that is "safe" by appealing to the legacy behavior, that's all.

Hi Keith,

How requests were allocated by the legacy block layer is irrelevant in
this context: the patch I posted affects the tag allocation strategy.
In the legacy block layer, tags were allocated by calling
blk_queue_start_tag(). From Linux kernel v4.20:

/**
 * blk_queue_start_tag - find a free tag and assign it
 * @q:  the request queue for the device
 * @rq:  the block request that needs tagging
 * [ ... ]
 **/

That function supported sharing tags between request queues but did not
attempt to be fair at all. This is how the SCSI core in Linux kernel
v4.20 set up tag sharing between request queues (from
drivers/scsi/scsi_scan.c):

	if (!shost_use_blk_mq(sdev->host)) {
		blk_queue_init_tags(sdev->request_queue,
				    sdev->host->cmd_per_lun, shost->bqt,
				    shost->hostt->tag_alloc_policy);
	}
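
To make that lack of fairness concrete, here is a minimal user-space
sketch (the struct, the helper and the numbers are made up for
illustration; this is not kernel code): two logical units allocate from
a single shared tag map on a first-come, first-served basis, so the
first LUN can drain the map completely and leave nothing for the second:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a tag map shared by all LUNs of a host. */
struct shared_tag_map {
	unsigned int depth;	/* total number of tags */
	unsigned int in_use;	/* tags currently allocated */
};

/* First come, first served: no per-queue limit of any kind. */
static bool get_tag(struct shared_tag_map *map)
{
	if (map->in_use == map->depth)
		return false;
	map->in_use++;
	return true;
}

int main(void)
{
	struct shared_tag_map map = { .depth = 62 };
	unsigned int lun0_tags = 0;

	/* LUN 0 keeps allocating until the shared map is empty ... */
	while (get_tag(&map))
		lun0_tags++;

	/* ... so LUN 1 cannot obtain even a single tag. */
	printf("LUN 0 got %u tags, LUN 1 got %u\n",
	       lun0_tags, get_tag(&map) ? 1U : 0U);
	return 0;
}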

blk-mq has always had a fairness algorithm for the case where a tag set
is shared across request queues: the number of tags available to each
request queue is restricted to the total number of tags divided by the
number of active users, rounded up and never below four. From
hctx_may_queue() in block/blk-mq.h in the latest kernel:

	depth = max((bt->sb.depth + users - 1) / users, 4U);
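
A quick way to see what that line implies is the stand-alone sketch
below; the helper name and the example numbers are mine, only the
ceiling division and the lower bound of four come from the quoted line:

#include <stdio.h>

/*
 * Same arithmetic as the kernel line above; callers are assumed to
 * pass users > 0, since hctx_may_queue() returns early when there are
 * no active users.
 */
static unsigned int fair_share(unsigned int total_tags, unsigned int users)
{
	unsigned int depth = (total_tags + users - 1) / users;

	return depth > 4 ? depth : 4;
}

int main(void)
{
	unsigned int users[] = { 1, 2, 8, 31 };

	/* 62 shared tags: 1 user -> 62, 2 -> 31, 8 -> 8, 31 -> 4 (floor). */
	for (unsigned int i = 0; i < sizeof(users) / sizeof(users[0]); i++)
		printf("%2u active queues -> at most %2u tags each\n",
		       users[i], fair_share(62, users[i]));
	return 0;
}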

What my patch does is remove this fairness guarantee. The legacy block
layer never had an equivalent of this guarantee.

Thanks,

Bart.


