On 11/26/19 4:54 PM, Ming Lei wrote:
> On Tue, Nov 26, 2019 at 12:27:50PM +0100, Hannes Reinecke wrote:
>> On 11/26/19 12:05 PM, Ming Lei wrote:
>> [ .. ]
>>> From a performance viewpoint, all hctxs belonging to this request queue
>>> should share one scheduler tagset in case of BLK_MQ_F_TAG_HCTX_SHARED,
>>> because the driver tag queue depth isn't changed.
>> Hmm. Now you get me confused.
>> In an earlier mail you said:
>>> This kind of sharing is wrong, sched tags should be request
>>> queue wide instead of tagset wide, and each request queue has
>>> its own & independent scheduler queue.
>> as in v2 we _had_ shared scheduler tags, too.
>> Did I misread your comment above?
> Yes, what I meant is that we can't share sched tags tagset-wide.
> What I mean now is that we should share sched tags among all hctxs in the
> same request queue, and I believe I have described that clearly.
I wonder if this makes a big difference; in the end, scheduler tags are
primarily there to allow the scheduler to queue more requests, and
potentially merge them. These tags are later converted into 'real' ones
via blk_mq_get_driver_tag(), and only then does the resource limitation
take hold.
Wouldn't it be sufficient to look at the number of outstanding commands
per queue when getting a scheduler tag, instead of having to implement
yet another bitmap?
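For illustration, the per-queue check I have in mind could look roughly
like the userspace sketch below. This is only a sketch of the idea, not
the actual blk-mq code: the names sched_queue, sched_tag_get and
sched_tag_put are made up for the example, and a simple atomic counter
stands in for the real per-hctx accounting.

```c
/* Hypothetical sketch: gate scheduler-tag allocation on the number of
 * outstanding commands per request queue, instead of maintaining a
 * separate per-hctx scheduler tag bitmap. Names are invented for the
 * example and do not correspond to real blk-mq structures. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct sched_queue {
	atomic_int in_flight;	/* outstanding commands on this queue */
	int depth;		/* scheduler queue depth limit */
};

/* Take a scheduler "tag": succeed only while the number of outstanding
 * commands on this queue is below its depth. */
static bool sched_tag_get(struct sched_queue *q)
{
	int cur = atomic_load(&q->in_flight);

	while (cur < q->depth) {
		/* On failure the CAS reloads 'cur', so just retry. */
		if (atomic_compare_exchange_weak(&q->in_flight, &cur, cur + 1))
			return true;
	}
	return false;	/* queue is at its depth limit */
}

/* Release the slot when the command completes. */
static void sched_tag_put(struct sched_queue *q)
{
	atomic_fetch_sub(&q->in_flight, 1);
}
```

With a counter like this, all hctxs of a request queue naturally share
one limit without any extra bitmap bookkeeping; the trade-off is that a
bare counter cannot hand out a distinct tag number, which the real
sched tags also provide.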
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare@xxxxxxx +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)