Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()

On 12/21/2021 5:33 PM, Jens Axboe wrote:
On 12/21/21 8:29 AM, Max Gurtovoy wrote:
On 12/21/2021 5:23 PM, Jens Axboe wrote:
On 12/21/21 3:20 AM, Max Gurtovoy wrote:
On 12/20/2021 8:58 PM, Jens Axboe wrote:
On 12/20/21 11:48 AM, Max Gurtovoy wrote:
On 12/20/2021 6:34 PM, Jens Axboe wrote:
On 12/20/21 8:29 AM, Max Gurtovoy wrote:
On 12/20/2021 4:19 PM, Jens Axboe wrote:
On 12/20/21 3:11 AM, Max Gurtovoy wrote:
On 12/19/2021 4:48 PM, Jens Axboe wrote:
On 12/19/21 5:14 AM, Max Gurtovoy wrote:
On 12/16/2021 7:16 PM, Jens Axboe wrote:
On 12/16/21 9:57 AM, Max Gurtovoy wrote:
On 12/16/2021 6:36 PM, Jens Axboe wrote:
On 12/16/21 9:34 AM, Max Gurtovoy wrote:
On 12/16/2021 6:25 PM, Jens Axboe wrote:
On 12/16/21 9:19 AM, Max Gurtovoy wrote:
On 12/16/2021 6:05 PM, Jens Axboe wrote:
On 12/16/21 9:00 AM, Max Gurtovoy wrote:
On 12/16/2021 5:48 PM, Jens Axboe wrote:
On 12/16/21 6:06 AM, Max Gurtovoy wrote:
On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
+				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
+		if (++nvmeq->sq_tail == nvmeq->q_depth)
+			nvmeq->sq_tail = 0;
So this doesn't even use the new helper added in patch 2?  I think this
should call nvme_sq_copy_cmd().
I also noticed that.

So we need to decide whether to open-code it or use the helper function.

An inline helper sounds reasonable if you have 3 places that will use it.
Yes agree, that's been my stance too :-)
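
For reference, with the helper from patch 2 the quoted loop would read roughly
like this. This is only a sketch: the closing doorbell write and unlock are not
part of the quoted hunk, so that tail is assumed here.

	spin_lock(&nvmeq->sq_lock);
	while (!rq_list_empty(*rqlist)) {
		struct request *req = rq_list_pop(rqlist);
		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

		/* copy the command into the SQ slot and advance the tail */
		nvme_sq_copy_cmd(nvmeq, &iod->cmd);
	}
	nvme_write_sq_db(nvmeq, true);	/* assumed: one doorbell write for the whole batch */
	spin_unlock(&nvmeq->sq_lock);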

The rest looks identical to the incremental patch I posted, so I guess
the performance degradation measured on the first try was a measurement
error?
Giving 1 doorbell (dbr) for a batch of N commands sounds like a good idea. Also for the RDMA host.

But how do you moderate it? What is the batch_sz <--> time_to_wait
algorithm?
The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
in total. I do agree that if we ever made it much larger, then we might
want to cap it differently. But 32 seems like a pretty reasonable number
to get enough gain from the batching done in various areas, while still
not making it so large that we have a potential latency issue. That
batch count is already used consistently for other items too (like tag
allocation), so it's not specific to just this one case.
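
Paraphrased (not an exact quote of the tree), the cap works by flushing a full
plug before another request is added to it, roughly:

	/* paraphrased: adding to a plug that already holds the max triggers a flush first */
	if (plug->rq_count >= BLK_MAX_REQUEST_COUNT)
		blk_mq_flush_plug_list(plug, false);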
I'm saying that you can wait too long for batch_max_count to be reached,
and that won't be efficient from a latency POV.

So it's better to limit the block layer to whichever comes first: x
usecs or batch_max_count, before issuing queue_rqs.
There's no waiting specifically for this, it's just based on the plug.
We just won't do more than 32 in that plug. This is really just an
artifact of the plugging, and if that should be limited based on "max of
32 or xx time", then that should be done there.

But in general I think it's saner and enough to just limit the total
size. If we spend more than xx usec building up the plug list, we're
doing something horribly wrong. That really should not happen with 32
requests, and we'll never, e.g., wait on requests if we're out of tags. That
will result in a plug flush to begin with.
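
To illustrate the mechanism being described (this is the long-standing
blk_start_plug()/blk_finish_plug() pattern, not something added by this series):
the plug lives on the submitting task's stack, and everything queued while it is
active is handed down when it is finished. A rough sketch, where nr and bios[]
are just placeholders:

	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);		/* batching starts; plug lives on this task's stack */
	for (i = 0; i < nr; i++)
		submit_bio(bios[i]);	/* requests accumulate on the plug list */
	blk_finish_plug(&plug);		/* flush: the whole batch goes to the driver in one go */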
I'm not familiar with the plug mechanism yet. I hope to get to it soon.

My concern is: if the user application submitted only 28 requests,
will you wait forever? Or for a very long time?

I guess not, but I'm asking how you know how to batch and when to
stop, in case 32 commands won't arrive anytime soon.
The plug is in the stack of the task, so that condition can never
happen. If the application originally asks for 32 but then only submits
28, then once that last one is submitted the plug is flushed and
requests are issued.
So if I'm running fio with --iodepth=28, what will the plug do? Send batches
of 28? Or 1 by 1?
--iodepth just controls the overall depth, the batch submit count
dictates what happens further down. If you run queue depth 28 and submit
one at a time, then you'll get one at a time further down too. Hence
the batching is directly driven by what the application is already
doing.
I see. Thanks for the explanation.

So it works only for io_uring-based applications?
It's only enabled for io_uring right now, but it's generically available
for anyone that wants to use it... It would be trivial to do for aio, and
for other spots that currently use blk_start_plug() and have an idea of how
many IOs will be submitted.
Can you please share an example application (or is it fio patches?) that
can submit batches? The same one that was used to test this patchset is
fine too.

I would like to test it with our NVMe SNAP controllers and also to
develop NVMe/RDMA queue_rqs code and test the perf with it.
You should just be able to use iodepth_batch with fio. For my peak
testing, I use t/io_uring from the fio repo. By default, it'll run QD of
and do batches of 32 for complete and submit. You can just run:

t/io_uring <dev or file>

maybe adding -p0 for IRQ-driven rather than polled IO.
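
If plain fio is easier for the RDMA setup, a run along these lines should drive
batched submission as well. The device path is just an example, and the
iodepth_batch/iodepth_batch_complete values are chosen to match the 32-request
batching discussed above:

	fio --name=batch --filename=/dev/nvme0n1 --ioengine=io_uring --direct=1 \
	    --rw=randread --bs=4k --iodepth=32 --iodepth_batch=32 \
	    --iodepth_batch_complete=32 --time_based --runtime=30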
I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA,
but it was never called, either with the t/io_uring test or with fio with the
iodepth_batch=32 flag and the io_uring engine.

Any idea what might be the issue ?

I installed fio from source.
The two main restrictions right now are a scheduler and shared tags; are
you using either of those?
No.

But maybe I'm missing the .commit_rqs callback. Is it mandatory for this
feature?
I've only tested with nvme pci which does have it, but I don't think so.
Unless there's some check somewhere that makes it necessary. Can you
share the patch you're currently using on top?
The attached POC patches apply cleanly on the block/for-next branch.
Looks reasonable to me from a quick glance. Not sure why you're not
seeing it hit, maybe try and instrument
block/blk-mq.c:blk_mq_flush_plug_list() and find out why it isn't being
called? As mentioned, no elevator or shared tags, should work for
anything else basically.
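
As a concrete example of the suggested instrumentation, a throwaway debug
printk would do; the field names below are from the queue_rqs series and may
differ slightly in the exact tree, so treat them as an assumption:

	/* hypothetical debug line near the top of blk_mq_flush_plug_list() */
	pr_info("plug flush: rq_count=%d multiple_queues=%d has_elevator=%d\n",
		plug->rq_count, plug->multiple_queues, plug->has_elevator);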
Yes. I saw that the block layer converted the original non-shared tagset
of NVMe/RDMA to a shared one because of the nvmf connect request queue,
which uses the same tagset (it only uses the reserved tag).

So I guess this is the reason I couldn't reach the new queue_rqs code.

The question is: how can we overcome this?
Do we need to mark it shared for just the reserved tags? I wouldn't
think so...
We don't mark it. The block layer does it in blk_mq_add_queue_tag_set:

if (!list_empty(&set->tag_list) &&
    !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
Yes, that's what I meant, do we need to mark it as such for just the
reserved tags?

I'm afraid it isn't related only to the reserved tags.

If you have an NVMe device with 2 namespaces, it will get to this code and mark
the tagset as shared. And then queue_rqs() won't be called for NVMe PCI either.
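
For context, the reason shared tags matter here is that the plug flush only
takes the queue_rqs() path when the tagset is not marked queue-shared.
Paraphrased (not an exact quote of the tree), the gate looks roughly like:

	/* paraphrased: skip the batched path if the tagset is queue-shared */
	if (q->mq_ops->queue_rqs &&
	    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
		q->mq_ops->queue_rqs(&plug->mq_list);

So a second namespace (or the nvmf connect queue) flipping
BLK_MQ_F_TAG_QUEUE_SHARED disables the fast path for the whole tagset.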





