Sagi Grimberg <sagi@xxxxxxxxxxx> writes:

>> Actually, I'd rather have something like an 'inverse io_uring', where
>> an application creates a memory region separated into several 'rings'
>> for submission and completion.
>> Then the kernel could write/map the incoming data onto the rings, and
>> the application can read from there.
>> Maybe it'll be worthwhile to look at virtio here.
>
> There is lio loopback backed by tcmu... I'm assuming that nvmet can
> hook into the same/similar interface. nvmet is pretty lean, and we
> can probably help tcmu/equivalent scale better if that is a concern...

Sagi,

I looked at tcmu prior to starting this work. Other than the tcmu
overhead, one concern was the complexity of a SCSI device interface
versus sending block requests to userspace.

What would be the advantage of doing it as an nvme target over
delivering requests directly to userspace as a block driver?

Also, considering the case where userspace wants to just look at the IO
descriptor without actually receiving the data, I'm not sure that would
be doable with tcmu?

Another attempt to do the same thing, this time with device-mapper:

https://patchwork.kernel.org/project/dm-devel/patch/20201203215859.2719888-4-palmer@xxxxxxxxxxx/

-- 
Gabriel Krisman Bertazi