On 9/2/22 9:16 AM, Kanchan Joshi wrote:
> Add blk_rq_map_user_bvec which maps the bvec iterator into a bio and
> places that into the request.
> This helper is to be used in nvme for uring-passthrough with
> fixed-buffer.
>
> Signed-off-by: Kanchan Joshi <joshi.k@xxxxxxxxxxx>
> Signed-off-by: Anuj Gupta <anuj20.g@xxxxxxxxxxx>
> ---
>  block/blk-map.c        | 71 ++++++++++++++++++++++++++++++++++++++++++
>  include/linux/blk-mq.h |  1 +
>  2 files changed, 72 insertions(+)
>
> diff --git a/block/blk-map.c b/block/blk-map.c
> index f3768876d618..0f7dc568e34b 100644
> --- a/block/blk-map.c
> +++ b/block/blk-map.c
> @@ -612,6 +612,77 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
>  }
>  EXPORT_SYMBOL(blk_rq_map_user);
>
> +/* Prepare bio for passthrough IO given an existing bvec iter */
> +int blk_rq_map_user_bvec(struct request *rq, struct iov_iter *iter)
> +{
> +	struct request_queue *q = rq->q;
> +	size_t iter_count, nr_segs;
> +	struct bio *bio;
> +	struct bio_vec *bv, *bvec_arr, *bvprvp = NULL;
> +	struct queue_limits *lim = &q->limits;
> +	unsigned int nsegs = 0, bytes = 0;
> +	int ret, i;
> +
> +	iter_count = iov_iter_count(iter);
> +	nr_segs = iter->nr_segs;
> +
> +	if (!iter_count || (iter_count >> 9) > queue_max_hw_sectors(q))
> +		return -EINVAL;
> +	if (nr_segs > queue_max_segments(q))
> +		return -EINVAL;
> +	if (rq->cmd_flags & REQ_POLLED) {
> +		blk_opf_t opf = rq->cmd_flags | REQ_ALLOC_CACHE;
> +
> +		/* no iovecs to alloc, as we already have a BVEC iterator */
> +		bio = bio_alloc_bioset(NULL, 0, opf, GFP_KERNEL,
> +					&fs_bio_set);
> +		if (!bio)
> +			return -ENOMEM;
> +	} else {
> +		bio = bio_kmalloc(0, GFP_KERNEL);
> +		if (!bio)
> +			return -ENOMEM;
> +		bio_init(bio, NULL, bio->bi_inline_vecs, 0, req_op(rq));
> +	}

I think this should be a helper at this point, as it's the same duplicated
code we have in the normal map path.

-- 
Jens Axboe
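[Editor's note: for concreteness, one possible shape for the helper being
suggested is sketched below. The name blk_rq_map_bio_alloc and the nr_vecs
parameter are illustrative only, not part of the patch; the allocation logic
is lifted from the hunk quoted above, and both this path and the normal map
path would call it, here with nr_vecs = 0.]

```c
/*
 * Sketch only (hypothetical name/signature): allocate a bio for a
 * passthrough request, shared by blk_rq_map_user_bvec() and the
 * normal map path.
 */
static struct bio *blk_rq_map_bio_alloc(struct request *rq,
		unsigned int nr_vecs, gfp_t gfp_mask)
{
	struct bio *bio;

	if (rq->cmd_flags & REQ_POLLED) {
		blk_opf_t opf = rq->cmd_flags | REQ_ALLOC_CACHE;

		/* cached allocation for the polled fast path */
		bio = bio_alloc_bioset(NULL, nr_vecs, opf, gfp_mask,
					&fs_bio_set);
		if (!bio)
			return NULL;
	} else {
		bio = bio_kmalloc(nr_vecs, gfp_mask);
		if (!bio)
			return NULL;
		bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs,
			 req_op(rq));
	}
	return bio;
}
```

With such a helper, the block quoted above collapses to a single
`bio = blk_rq_map_bio_alloc(rq, 0, GFP_KERNEL);` call followed by one
NULL check, and the duplicated branch disappears from both call sites.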