> +/*
> + * Unlike blk_rq_map_user() this is only for fixed-buffer async passthrough.
> + * And hopefully faster as well.
> + */

This belongs into io_uring.c.  And that hopeful comment needs to be
validated and removed.

> +int nvme_rq_map_user_fixedb(struct request_queue *q, struct request *rq,
> +		void __user *ubuf, unsigned long len, gfp_t gfp_mask,
> +		struct io_uring_cmd *ioucmd)
> +{
> +	struct iov_iter iter;
> +	size_t iter_count, nr_segs;
> +	struct bio *bio;
> +	int ret;
> +
> +	/*
> +	 * Talk to io_uring to obtain BVEC iterator for the buffer.
> +	 * And use that iterator to form bio/request.
> +	 */
> +	ret = io_uring_cmd_import_fixed(ubuf, len, rq_data_dir(rq), &iter,
> +			ioucmd);

io_uring_cmd_import_fixed takes a non-__user pointer, so this will cause
a sparse warning.