On Tue, Apr 5, 2022 at 11:32 AM Christoph Hellwig <hch@xxxxxx> wrote:
>
> On Mon, Apr 04, 2022 at 07:55:05PM +0530, Kanchan Joshi wrote:
> > > Something like this (untested) patch should help to separate
> > > the much better:
> >
> > It does, thanks. But one thing: it would be good to support
> > vectored-passthru too (i.e. NVME_IOCTL_IO64_CMD_VEC) for this path.
> > For the new opcode "NVME_URING_CMD_IO", we can either change the
> > cmd-structure or use flag-based handling so that vectored-io is
> > supported. Or we introduce NVME_URING_CMD_IO_VEC for that.
> > Which one do you prefer?
>
> I agree vectored I/O support is useful.
>
> Do we even need to support the non-vectored case?

Would be good to have, I suppose. It helps keep things simple when
user-space wants to use a single buffer (otherwise it must carry a
pseudo iovec for that too).

> Also I think we'll want admin command passthrough on /dev/nvmeX as
> well, but I'm fine solving the other items first.
>
> > > +static int nvme_ioctl_finish_metadata(struct bio *bio, int ret,
> > > +               void __user *meta_ubuf)
> > > +{
> > > +       struct bio_integrity_payload *bip = bio_integrity(bio);
> > > +
> > > +       if (bip) {
> > > +               void *meta = bvec_virt(bip->bip_vec);
> > > +
> > > +               if (!ret && bio_op(bio) == REQ_OP_DRV_IN &&
> > > +                   copy_to_user(meta_ubuf, meta, bip->bip_vec->bv_len))
> > > +                       ret = -EFAULT;
> >
> > Maybe it is better to move the check "bio_op(bio) != REQ_OP_DRV_IN"
> > outside. That case can be common, and then we avoid entering the
> > function call itself (i.e. nvme_ioctl_finish_metadata).
>
> Function calls are pretty cheap, but I'll see what we can do. I'll try
> to come up with a prep series in the next few days to refactor the
> passthrough support so that adding the io_uring bits is easier.

In that case we will base the newer version on top of it.
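
To make the NVME_URING_CMD_IO_VEC option a bit more concrete, below is a
rough, untested sketch of the dispatch it would imply. The handler name
nvme_uring_cmd_io(), its trailing "vec" argument, and the cmd_op field are
placeholders for whatever the real series ends up with; the only point is
that both opcodes could share one command structure and handler.

/*
 * Sketch only: both opcodes use the same command structure; the
 * vectored one merely tells the handler to import an iovec instead
 * of a single buffer.  nvme_uring_cmd_io() is a placeholder name.
 */
static int nvme_ns_uring_cmd(struct nvme_ns *ns, struct io_uring_cmd *ioucmd,
                             unsigned int issue_flags)
{
        switch (ioucmd->cmd_op) {
        case NVME_URING_CMD_IO:
                return nvme_uring_cmd_io(ns, ioucmd, issue_flags, false);
        case NVME_URING_CMD_IO_VEC:
                return nvme_uring_cmd_io(ns, ioucmd, issue_flags, true);
        default:
                return -ENOTTY;
        }
}

With a separate opcode the command structure stays unchanged and a
single-buffer caller never has to build a one-entry iovec; the flag-based
alternative would instead reuse NVME_URING_CMD_IO and carry a "vectored"
bit inside the command itself.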
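
And for the metadata part, this is roughly what moving the check outside
would look like. Untested: the helper keeps only the copy-out, and the
caller shown here (nvme_complete_user_rq) is made up purely for
illustration; it is not a function from the quoted patch or an existing
tree.

static int nvme_ioctl_finish_metadata(struct bio *bio, int ret,
                void __user *meta_ubuf)
{
        struct bio_integrity_payload *bip = bio_integrity(bio);

        /* copy-out only; the caller has already filtered on the opcode */
        if (!ret && bip &&
            copy_to_user(meta_ubuf, bvec_virt(bip->bip_vec),
                         bip->bip_vec->bv_len))
                ret = -EFAULT;
        return ret;
}

/* hypothetical caller, for illustration only */
static int nvme_complete_user_rq(struct request *req, int ret,
                void __user *meta_ubuf)
{
        struct bio *bio = req->bio;

        /* only passthrough reads have metadata to copy back to user space */
        if (bio && bio_op(bio) == REQ_OP_DRV_IN)
                ret = nvme_ioctl_finish_metadata(bio, ret, meta_ubuf);
        return ret;
}

That way the common non-read path skips the call entirely, at the cost of
repeating the opcode check at each call site.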