On Tue, Dec 17, 2024 at 12:14 AM Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
>
> On Mon, 16 Dec 2024 at 10:54, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> >
> > Hacking passthrough into virtio_blk seems like not very good layering.
> > If you have a use case where you want to use the core kernel virtio code
> > but not the protocol drivers we'll probably need a virtqueue passthrough
> > option of some kind.
>
> I think people are finding that submitting I/O via uring_cmd is faster
> than traditional io_uring. The use case isn't really passthrough, it's
> bypass :).
>
> That's why I asked Jens to weigh in on whether there is a generic
> block layer solution here. If uring_cmd is faster then maybe a generic
> uring_cmd I/O interface can be defined without tying applications to
> device-specific commands. Or maybe the traditional io_uring code path
> can be optimized so that bypass is no longer attractive.
>
> The virtio-level virtqueue passthrough idea is interesting for use
> cases that mix passthrough applications with non-passthrough
> applications. VFIO isn't enough because it prevents sharing and
> excludes non-passthrough applications. Something similar to VDPA
> might be able to pass through just a subset of virtqueues that
> userspace could access via the vhost_vdpa driver.

I think it could be reused for a mixed approach like this: the vDPA
driver could implement a shadow virtqueue, so we would effectively
replace io_uring with the virtqueue here. Or, if vDPA is considered
too heavyweight, vhost-blk could be another option.

> This approach
> doesn't scale if many applications are running at the same time
> because the number of virtqueues is finite and often the same as the
> number of CPUs.
>
> Stefan

Thanks
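
For reference, the "bypass" path Stefan describes already exists for
NVMe via IORING_OP_URING_CMD. Below is a minimal sketch of what that
submission path looks like today; a virtio-blk uring_cmd interface
would presumably take the same shape with a virtio-blk-specific
command payload in place of struct nvme_uring_cmd. The device path
/dev/ng0n1, the 4096-byte logical block size, and the stripped error
handling are all assumptions for illustration, not part of the
proposal under discussion:

/*
 * Sketch: direct NVMe I/O via IORING_OP_URING_CMD (liburing).
 * Assumes an NVMe generic char device and a 4 KiB logical block.
 */
#include <liburing.h>
#include <linux/nvme_ioctl.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NVME_CMD_READ	0x02	/* NVMe I/O command set: Read */

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd;

	/* uring_cmd needs the big SQE format to carry the command */
	if (io_uring_queue_init(8, &ring,
				IORING_SETUP_SQE128 | IORING_SETUP_CQE32))
		return 1;

	fd = open("/dev/ng0n1", O_RDONLY);	/* hypothetical device */
	if (fd < 0)
		return 1;
	if (posix_memalign(&buf, 4096, 4096))
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_rw(IORING_OP_URING_CMD, sqe, fd, NULL, 0, 0);
	sqe->cmd_op = NVME_URING_CMD_IO;	/* driver-defined op */

	/* The device-specific command lives in the SQE's extended area */
	struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *)sqe->cmd;
	memset(cmd, 0, sizeof(*cmd));
	cmd->opcode = NVME_CMD_READ;
	cmd->nsid = 1;
	cmd->addr = (__u64)(uintptr_t)buf;
	cmd->data_len = 4096;
	cmd->cdw10 = 0;		/* starting LBA (low 32 bits) */
	cmd->cdw12 = 0;		/* number of blocks, 0-based: 1 block */

	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("uring_cmd result: %d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	return 0;
}

The point of contention in the thread is visible here: the application
builds a device-specific command (struct nvme_uring_cmd) rather than a
generic block request, which is exactly the tie-in a generic uring_cmd
I/O interface would try to avoid.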