Re: Kernel NBD client waits on wrong cookie, aborts connection

On Tue, Oct 15, 2024 at 08:15:17PM +0800, Ming Lei wrote:
> On Tue, Oct 15, 2024 at 8:11 PM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> >
> > On Tue, Oct 15, 2024 at 08:01:43PM +0800, Ming Lei wrote:
> > > On Tue, Oct 15, 2024 at 6:22 PM Kevin Wolf <kwolf@xxxxxxxxxx> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > the other day I was running some benchmarks to compare different QEMU
> > > > block exports, and one of the scenarios I was interested in was
> > > > exporting NBD from qemu-storage-daemon over a unix socket and attaching
> > > > it as a block device using the kernel NBD client. I would then run a VM
> > > > on top of it and fio inside of it.
> > > >
> > > > Unfortunately, I couldn't get any numbers because the connection always
> > > > aborted with messages like "Double reply on req ..." or "Unexpected
> > > > reply ..." in the host kernel log.
> > > >
> > > > Yesterday I found some time to have a closer look why this is happening,
> > > > and I think I have a rough understanding of what's going on now. Look at
> > > > these trace events:
> > > >
> > > >         qemu-img-51025   [005] ..... 19503.285423: nbd_header_sent: nbd transport event: request 000000002df03708, handle 0x0000150c0000005a
> > > > [...]
> > > >         qemu-img-51025   [008] ..... 19503.285500: nbd_payload_sent: nbd transport event: request 000000002df03708, handle 0x0000150c0000005d
> > > > [...]
> > > >    kworker/u49:1-47350   [004] ..... 19503.285514: nbd_header_received: nbd transport event: request 00000000b79e7443, handle 0x0000150c0000005a
> > > >
> > > > This is the same request, but the handle has changed between
> > > > nbd_header_sent and nbd_payload_sent! I think this means that we hit one
> > > > of the cases where the request is requeued and then executed again with a
> > > > different blk-mq tag, which is something the nbd driver doesn't seem to
> > > > expect.
> > > >
> > > > Of course, since the cookie is transmitted in the header, the server
> > > > replies with the original handle that contains the tag from the first
> > > > call, while the kernel is only waiting for a handle with the new tag and
> > > > is confused by the server response.
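
For reference, this is roughly how the 64-bit handle in the traces above
is put together. It is a simplified sketch rather than the exact kernel
code, assuming NBD_COOKIE_BITS == 32 and BLK_MQ_UNIQUE_TAG_BITS == 16 as
in current mainline; the helper name is made up:

/*
 * The low 32 bits carry blk_mq_unique_tag(), i.e. the hw queue number
 * plus rq->tag, so a requeue that ends up with a different rq->tag
 * produces a different handle for the same logical request:
 *
 *   0x0000150c0000005a -> cmd_cookie 0x150c, hwq 0, tag 0x5a
 *   0x0000150c0000005d -> cmd_cookie 0x150c, hwq 0, tag 0x5d
 */
static u64 example_nbd_handle(u32 cmd_cookie, u16 hwq, u16 tag)
{
	u32 unique_tag = ((u32)hwq << 16) | tag;	/* blk_mq_unique_tag() */

	return ((u64)cmd_cookie << 32) | unique_tag;	/* << NBD_COOKIE_BITS */
}
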
> > > >
> > > > I'm not sure yet which of the following options should be considered the
> > > > real problem here, so I'm only describing the situation without trying
> > > > to provide a patch:
> > > >
> > > > 1. Is it that blk-mq should always re-run the request with the same tag?
> > > >    I don't expect so, though in practice I was surprised to see how
> > > >    often a request that nbd requeues actually does end up with the
> > > >    same cookie again.
> > >
> > > No.
> > >
> > > request->tag will change, but we can use ->internal_tag (with an I/O
> > > scheduler) or ->tag (with none), which won't change.
> > >
> > > I guess was_interrupted() in nbd_send_cmd() is triggered, and then the
> > > payload is sent with a different tag.
> > >
> > > I will try to cook one patch soon.
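
In short, the idea is to derive the cookie from a tag that survives a
requeue. A minimal sketch of the selection the v2 patch below builds on;
the helper name is made up:

/*
 * With an I/O scheduler the driver tag (rq->tag) can be dropped and
 * reacquired across a requeue, but rq->internal_tag stays fixed for the
 * lifetime of the request; without a scheduler rq->tag itself does not
 * change.
 */
static u32 example_static_tag(struct request *rq)
{
	return rq->q->elevator ? rq->internal_tag : rq->tag;
}
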
> >
> > Please try the following patch:
> 
> Oops, please ignore the patch, it can't work since
> nbd_handle_reply() doesn't know the static tag.

Please try the v2:


diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 2cafcf11ee8b..8bad4030b2e4 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -682,3 +682,31 @@ u32 blk_mq_unique_tag(struct request *rq)
 		(rq->tag & BLK_MQ_UNIQUE_TAG_MASK);
 }
 EXPORT_SYMBOL(blk_mq_unique_tag);
+
+/*
+ * Same as blk_mq_unique_tag(), but the included tag stays the same over
+ * the whole lifetime of the request.
+ */
+u32 blk_mq_unique_static_tag(struct request *rq)
+{
+	u32 tag = rq->q->elevator ? rq->internal_tag : rq->tag;
+
+	return (rq->mq_hctx->queue_num << BLK_MQ_UNIQUE_TAG_BITS) |
+		(tag & BLK_MQ_UNIQUE_TAG_MASK);
+}
+EXPORT_SYMBOL(blk_mq_unique_static_tag);
+
+struct request *blk_mq_static_tag_to_req(struct request_queue *q, u32 uniq_tag)
+{
+	unsigned long hwq = blk_mq_unique_tag_to_hwq(uniq_tag);
+	u32 tag = blk_mq_unique_tag_to_tag(uniq_tag);
+	const struct blk_mq_hw_ctx *hctx = xa_load(&q->hctx_table, hwq);
+
+	if (!hctx)
+		return NULL;
+
+	if (q->elevator)
+		return hctx->sched_tags->static_rqs[tag];
+	return hctx->tags->static_rqs[tag];
+}
+EXPORT_SYMBOL(blk_mq_static_tag_to_req);
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index b852050d8a96..5be324233c9f 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -201,7 +201,7 @@ static void nbd_requeue_cmd(struct nbd_cmd *cmd)
 static u64 nbd_cmd_handle(struct nbd_cmd *cmd)
 {
 	struct request *req = blk_mq_rq_from_pdu(cmd);
-	u32 tag = blk_mq_unique_tag(req);
+	u32 tag = blk_mq_unique_static_tag(req);
 	u64 cookie = cmd->cmd_cookie;
 
 	return (cookie << NBD_COOKIE_BITS) | tag;
@@ -818,10 +818,7 @@ static struct nbd_cmd *nbd_handle_reply(struct nbd_device *nbd, int index,
 
 	handle = be64_to_cpu(reply->cookie);
 	tag = nbd_handle_to_tag(handle);
-	hwq = blk_mq_unique_tag_to_hwq(tag);
-	if (hwq < nbd->tag_set.nr_hw_queues)
-		req = blk_mq_tag_to_rq(nbd->tag_set.tags[hwq],
-				       blk_mq_unique_tag_to_tag(tag));
+	req = blk_mq_static_tag_to_req(nbd->disk->queue, tag);
 	if (!req || !blk_mq_request_started(req)) {
 		dev_err(disk_to_dev(nbd->disk), "Unexpected reply (%d) %p\n",
 			tag, req);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 4fecf46ef681..9c4ef3f16a77 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -793,6 +793,8 @@ enum {
 };
 
 u32 blk_mq_unique_tag(struct request *rq);
+u32 blk_mq_unique_static_tag(struct request *rq);
+struct request *blk_mq_static_tag_to_req(struct request_queue *q, u32 tag);
 
 static inline u16 blk_mq_unique_tag_to_hwq(u32 unique_tag)
 {

-- 
Ming