Re: [PATCH 08/17] nvme: enable passthrough with fixed-buffer

On Tue, Mar 08, 2022 at 08:50:56PM +0530, Kanchan Joshi wrote:
> From: Anuj Gupta <anuj20.g@xxxxxxxxxxx>
> 
> Add support to carry out passthrough command with pre-mapped buffers.
> 
> Signed-off-by: Anuj Gupta <anuj20.g@xxxxxxxxxxx>
> Signed-off-by: Kanchan Joshi <joshi.k@xxxxxxxxxxx>
> ---
>  block/blk-map.c           | 45 +++++++++++++++++++++++++++++++++++++++
>  drivers/nvme/host/ioctl.c | 27 ++++++++++++++---------
>  include/linux/blk-mq.h    |  2 ++
>  3 files changed, 64 insertions(+), 10 deletions(-)
> 
> diff --git a/block/blk-map.c b/block/blk-map.c
> index 4526adde0156..027e8216e313 100644
> --- a/block/blk-map.c
> +++ b/block/blk-map.c
> @@ -8,6 +8,7 @@
>  #include <linux/bio.h>
>  #include <linux/blkdev.h>
>  #include <linux/uio.h>
> +#include <linux/io_uring.h>
>  
>  #include "blk.h"
>  
> @@ -577,6 +578,50 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
>  }
>  EXPORT_SYMBOL(blk_rq_map_user);
>  
> +/* Unlike blk_rq_map_user(), this is only for fixed-buffer async passthrough. */
> +int blk_rq_map_user_fixedb(struct request_queue *q, struct request *rq,
> +		     u64 ubuf, unsigned long len, gfp_t gfp_mask,
> +		     struct io_uring_cmd *ioucmd)
> +{
> +	struct iov_iter iter;
> +	size_t iter_count, nr_segs;
> +	struct bio *bio;
> +	int ret;
> +
> +	/*
> +	 * Talk to io_uring to obtain BVEC iterator for the buffer.
> +	 * And use that iterator to form bio/request.
> +	 */
> +	ret = io_uring_cmd_import_fixed(ubuf, len, rq_data_dir(rq), &iter,
> +			ioucmd);
> +	if (unlikely(ret < 0))
> +		return ret;
> +	iter_count = iov_iter_count(&iter);
> +	nr_segs = iter.nr_segs;
> +
> +	if (!iter_count || (iter_count >> 9) > queue_max_hw_sectors(q))
> +		return -EINVAL;
> +	if (nr_segs > queue_max_segments(q))
> +		return -EINVAL;
> +	/* no iovecs to alloc, as we already have a BVEC iterator */
> +	bio = bio_alloc(gfp_mask, 0);
> +	if (!bio)
> +		return -ENOMEM;
> +
> +	ret = bio_iov_iter_get_pages(bio, &iter);

Here bio_iov_iter_get_pages() may not work as expected, since the code
needs to check the queue limits before adding pages to the bio, and we
don't run the split path for passthrough bios. __bio_iov_append_get_pages()
could be generalized to cover this case.
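
A minimal sketch of what that generalization could look like (the helper
name is made up, and it assumes each bvec of a registered buffer stays
within a single page, as io_uring builds them today): walk the BVEC
iterator and add each segment with bio_add_pc_page(), which already
enforces max_hw_sectors and max_segments, so the passthrough bio never
needs to be split.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/uio.h>

/*
 * Hypothetical helper (sketch only): populate a passthrough bio from a
 * BVEC iterator while honouring the queue's hardware limits.
 * bio_add_pc_page() refuses any page that would exceed max_hw_sectors
 * or max_segments, so no later split is required.
 */
static int bio_map_fixedb_pages(struct request_queue *q, struct bio *bio,
				struct iov_iter *iter)
{
	while (iov_iter_count(iter)) {
		const struct bio_vec *bv = iter->bvec;
		/* offset/len of the not-yet-consumed part of this bvec */
		unsigned int off = bv->bv_offset + iter->iov_offset;
		unsigned int len = min_t(unsigned int,
					 bv->bv_len - iter->iov_offset,
					 iov_iter_count(iter));

		if (bio_add_pc_page(q, bio, bv->bv_page, len, off) != len)
			return -EINVAL;

		iov_iter_advance(iter, len);
	}
	return 0;
}

In blk_rq_map_user_fixedb() above, something like this would take the
place of the bio_iov_iter_get_pages() call.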


Thanks, 
Ming



