On 2/10/21 2:01 AM, Sagi Grimberg wrote:
Thanks for reporting Ming, I've tried to reproduce this on my VM
but did not succeed. Given that you have it 100% reproducible,
can you try to revert commit:
0dc9edaf80ea ("nvme-tcp: pass multipage bvec to request iov_iter")
Reverting this commit fixed the issue; I've attached the config. :)
Hey Ming,
Instead of a revert, does this patch make the issue go away?
Hi Sagi
The patch below fixed the issue; let me know if you need more testing. :)
Thanks Yi,
So it's nvme_cmd_write_zeroes here (opcode 0x08 on an I/O queue), which matches the WRITE_ZEROES error in the log:
[ 74.017450] run blktests nvme/012 at 2021-02-09 21:41:55
[ 74.111311] loop: module loaded
[ 74.125717] loop0: detected capacity change from 2097152 to 0
[ 74.141026] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 74.149395] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[ 74.158298] nvmet: creating controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:41131d88-02ca-4ccc-87b3-6ca3f28b13a4.
[ 74.158742] nvme nvme0: creating 48 I/O queues.
[ 74.163391] nvme nvme0: mapped 48/0/0 default/read/poll queues.
[ 74.184623] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr 127.0.0.1:4420
[ 75.235059] nvme_tcp: rq 38 opcode 8
[ 75.238653] blk_update_request: I/O error, dev nvme0c0n1, sector 1048624 op 0x9:(WRITE_ZEROES) flags 0x2800800 phys_seg 0 prio class 0
[ 75.380179] XFS (nvme0n1): Mounting V5 Filesystem
[ 75.387457] XFS (nvme0n1): Ending clean mount
[ 75.388555] xfs filesystem being mounted at /mnt/blktests supports timestamps until 2038 (0x7fffffff)
[ 91.035659] XFS (nvme0n1): Unmounting Filesystem
[ 91.043334] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
I'll submit a proper patch, but can you run this change to see which command has a bio but no data?
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 619b0d8f6e38..311f1b78a9d4 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2271,8 +2271,13 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 	req->data_len = blk_rq_nr_phys_segments(rq) ?
 				blk_rq_payload_bytes(rq) : 0;
 	req->curr_bio = rq->bio;
-	if (req->curr_bio)
+	if (req->curr_bio) {
+		if (!req->data_len) {
+			pr_err("rq %d opcode %d\n", rq->tag, pdu->cmd.common.opcode);
+			return BLK_STS_IOERR;
+		}
 		nvme_tcp_init_iter(req, rq_data_dir(rq));
+	}
 
 	if (rq_data_dir(rq) == WRITE &&
 	    req->data_len <= nvme_tcp_inline_data_size(queue))
--