Re: [LSF/MM/BPF TOPIC] block drivers in user space

Hi Sagi,

Haven't these use cases already been mentioned in the email at the start of this thread? The use cases I am aware of are implementing cloud-specific block storage functionality and also block storage in user space for Android. Having to parse NVMe commands and PRP or SGL lists would be an unnecessary source of complexity and overhead for these use cases. My understanding is that what is needed for these use cases is something that is close to the block layer request interface (REQ_OP_* + request flags + data buffer).
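
A rough sketch of what such a request descriptor could look like, purely
for illustration (the names are hypothetical, not an existing ABI; the
data buffer is just a plain userspace address that the kernel side would
map or copy into):

#include <stdint.h>

enum user_blk_op {                /* mirrors REQ_OP_READ/WRITE/FLUSH/DISCARD */
	USER_BLK_READ,
	USER_BLK_WRITE,
	USER_BLK_FLUSH,
	USER_BLK_DISCARD,
};

struct user_blk_req {
	uint16_t op;              /* enum user_blk_op                         */
	uint16_t flags;           /* REQ_FUA / REQ_PREFLUSH style flags       */
	uint32_t nr_sectors;      /* transfer length in 512-byte sectors      */
	uint64_t start_sector;    /* offset on the virtual device             */
	uint64_t data_addr;       /* userspace address of the data buffer     */
	uint64_t tag;             /* echoed back in the completion            */
};

struct user_blk_cqe {
	uint64_t tag;             /* matches user_blk_req.tag                 */
	int32_t  result;          /* 0 on success or negative errno           */
	uint32_t pad;
};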


Curiously, the former was exactly my idea. I was thinking about having a simple nvmet userspace driver where all the transport 'magic' is handled in the nvmet driver, and just the NVMe SQEs are passed on to the userland driver. The userland driver would then send the CQEs back to the driver. With that, the kernel driver becomes extremely simple, and userspace would be free to do all the magic it wants. More to the point, one could implement all sorts of fancy features that are out of scope for the current nvmet implementation.
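
As a sketch of the userland side of that split (assuming the kernel
nvmet code has already done the transport work and resolved PRP/SGL
lists into a flat buffer): the structs below are trimmed-down copies of
the NVMe wire formats, and the backing-file logic is purely
illustrative, with a hard-coded 512-byte block size.

#include <stdint.h>
#include <unistd.h>

struct nvme_sqe {                 /* 64-byte submission queue entry */
	uint8_t  opcode;
	uint8_t  flags;
	uint16_t command_id;
	uint32_t nsid;
	uint64_t rsvd;
	uint64_t mptr;
	uint64_t prp1, prp2;      /* already resolved to a flat buffer */
	uint32_t cdw10, cdw11, cdw12, cdw13, cdw14, cdw15;
};

struct nvme_cqe {                 /* 16-byte completion queue entry */
	uint32_t result;
	uint32_t rsvd;
	uint16_t sq_head, sq_id;
	uint16_t command_id;
	uint16_t status;
};

#define NVME_CMD_WRITE 0x01
#define NVME_CMD_READ  0x02

/* Execute one SQE against a plain backing file and build its CQE. */
static void handle_sqe(int backing_fd, void *data,
		       const struct nvme_sqe *sqe, struct nvme_cqe *cqe)
{
	uint64_t slba = ((uint64_t)sqe->cdw11 << 32) | sqe->cdw10;
	uint32_t nblk = (sqe->cdw12 & 0xffff) + 1;      /* NLB is 0's based */
	size_t   len  = (size_t)nblk * 512;
	ssize_t  ret  = -1;

	if (sqe->opcode == NVME_CMD_READ)
		ret = pread(backing_fd, data, len, slba * 512);
	else if (sqe->opcode == NVME_CMD_WRITE)
		ret = pwrite(backing_fd, data, len, slba * 512);

	cqe->command_id = sqe->command_id;
	cqe->result     = 0;
	cqe->status     = (ret == (ssize_t)len) ? 0 : 0x6;  /* generic internal error */
}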

My thinking is that this simplification can be done in a userland
core library with a simpler interface for backends to plug into (or
a richer interface if that is what the use-case warrants).
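
Purely as a sketch of what the backend-facing side of such a library
could look like (the names here are hypothetical, not an existing API):

#include <stdint.h>
#include <sys/uio.h>

/*
 * A minimal vtable a backend would register with the core library.
 * The library owns the ring/transport handling and calls into these
 * hooks with plain block-layer-style requests; a richer variant could
 * expose the raw command instead.
 */
struct blk_user_backend_ops {
	int  (*open)(const char *arg, void **priv);
	int  (*read)(void *priv, uint64_t off, struct iovec *iov, int iovcnt);
	int  (*write)(void *priv, uint64_t off, struct iovec *iov, int iovcnt);
	int  (*flush)(void *priv);
	int  (*discard)(void *priv, uint64_t off, uint64_t len);
	void (*close)(void *priv);
};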

Which is why I've been talking about 'inverse' io_uring; the userland driver will have to wait for SQEs and write CQEs back to the driver.

"inverse" io_uring is just a ring interface, tcmu has it as well, I'm
assuming you are talking about the scalability attributes of it...
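
For what it's worth, the consumer side of such an "inverse" ring might
look roughly like this from userspace. The device node, ring layout and
doorbell write are all hypothetical (loosely modelled on a tcmu-style
mailbox), not an existing kernel interface, and error handling is
omitted:

#include <fcntl.h>
#include <poll.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

struct ring_hdr {
	volatile uint32_t sq_head, sq_tail;    /* kernel produces SQEs      */
	volatile uint32_t cq_head, cq_tail;    /* userspace produces CQEs   */
	uint32_t          sq_entries, cq_entries;
	/* SQE and CQE arrays follow in the same mapping */
};

int main(void)
{
	int fd = open("/dev/nvmet-user0", O_RDWR);          /* hypothetical */
	struct ring_hdr *hdr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				    MAP_SHARED, fd, 0);
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	for (;;) {
		poll(&pfd, 1, -1);                  /* wait for new SQEs */

		while (hdr->sq_head != hdr->sq_tail) {
			/* fetch the SQE at sq_head, execute it (e.g. via
			 * handle_sqe() above), then post a CQE at cq_tail */
			hdr->sq_head = (hdr->sq_head + 1) % hdr->sq_entries;
			hdr->cq_tail = (hdr->cq_tail + 1) % hdr->cq_entries;
		}
		write(fd, "", 1);           /* hypothetical doorbell: kick
					       the kernel to reap the CQEs */
	}
}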


