Re: kernel null pointer at nvme_tcp_init_iter+0x7d/0xd0 [nvme_tcp]

One obvious error is that nr_segments is computed incorrectly.

Yi, can you try the following patch?

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 881d28eb15e9..a393d99b74e1 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -239,9 +239,14 @@ static void nvme_tcp_init_iter(struct nvme_tcp_request *req,
 		offset = 0;
 	} else {
 		struct bio *bio = req->curr_bio;
+		struct bio_vec bv;
+		struct bvec_iter iter;
+
+		nsegs = 0;
+		bio_for_each_bvec(bv, bio, iter)
+			nsegs++;
 		vec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
-		nsegs = bio_segments(bio);

This was exactly the patch that caused the issue.

What was the issue you are talking about? Any link or commit hash?

The commit that caused the crash is:
0dc9edaf80ea nvme-tcp: pass multipage bvec to request iov_iter


nvme-tcp builds the iov_iter (ITER_BVEC) from __bvec_iter_bvec(), so
the segment count has to be the actual number of bvecs. But
bio_segments() just returns the number of single-page segments, which
is the wrong count for this iov_iter.
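
For illustration, a minimal userspace sketch (not kernel code; the
simplified bvec layout and the 4 KiB page size below are assumptions
made for the example) of why the per-page count can exceed the number
of entries in the bvec array:

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* simplified stand-in for struct bio_vec */
struct fake_bvec {
	void   *base;	/* start of a physically contiguous range */
	size_t  len;	/* may cover several pages (multipage bvec) */
};

/* roughly what bio_segments() counts: one per page touched */
static size_t single_page_segments(const struct fake_bvec *v, size_t n)
{
	size_t i, segs = 0;

	for (i = 0; i < n; i++)
		segs += (v[i].len + PAGE_SIZE - 1) / PAGE_SIZE;
	return segs;
}

int main(void)
{
	/* one multipage bvec covering four contiguous pages */
	struct fake_bvec vec[1] = { { NULL, 4 * PAGE_SIZE } };

	printf("entries in the bvec array: 1\n");
	printf("single-page segments:      %zu\n",
	       single_page_segments(vec, 1));
	/*
	 * An iterator built directly over vec[] must be told 1 (the
	 * bvec count), not 4; otherwise it walks past the array.
	 */
	return 0;
}

With one bvec spanning four pages, the array has a single entry but the
per-page count is 4, so the count fed to the iov_iter is too big.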

That is what I thought, but it's causing a crash, and it was fine with
bio_segments(). So I'm trying to understand why that is.

Please see the same usage in lo_rw_aio().

nvme-tcp works on a per-bio basis to avoid bvec allocation
in the data path. Hence the iterator is fed directly from
the bio's bvec array and is re-initialized on every bio
spanned by the request.
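
As a rough userspace sketch of that flow (the struct names and fields
below are simplified stand-ins, not the real nvme-tcp or block layer
types), the iterator is simply re-pointed at each bio's own bvec array
with that bio's bvec count:

#include <stdio.h>
#include <stddef.h>

struct fake_bvec {
	void   *base;
	size_t  len;
};

struct fake_bio {
	struct fake_bvec *vec;		/* bvec array owned by the bio */
	size_t            nr_bvecs;	/* actual entries in vec[] */
	struct fake_bio  *next;		/* next bio in the request */
};

struct fake_iter {
	const struct fake_bvec *bvec;
	size_t nr_segs;
	size_t idx;
};

/* point the iterator at one bio's bvec array; no allocation needed */
static void iter_init_from_bio(struct fake_iter *it,
			       const struct fake_bio *bio)
{
	it->bvec    = bio->vec;
	it->nr_segs = bio->nr_bvecs;	/* the bvec count, not a page count */
	it->idx     = 0;
}

int main(void)
{
	struct fake_bvec v1[1] = { { NULL, 8192 } };
	struct fake_bvec v2[2] = { { NULL, 4096 }, { NULL, 4096 } };
	struct fake_bio b2 = { v2, 2, NULL };
	struct fake_bio b1 = { v1, 1, &b2 };
	const struct fake_bio *bio;
	struct fake_iter it;

	/* the request spans two bios; the iterator is reset for each one */
	for (bio = &b1; bio; bio = bio->next) {
		iter_init_from_bio(&it, bio);
		for (; it.idx < it.nr_segs; it.idx++)
			printf("bvec %zu: len=%zu\n",
			       it.idx, it.bvec[it.idx].len);
	}
	return 0;
}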


