Re: [LSF/MM/BPF TOPIC] block drivers in user space

On 2/21/22 20:59, Gabriel Krisman Bertazi wrote:
> I'd like to discuss an interface for implementing user space block devices,
> while avoiding local network NBD solutions.  There has been repeated
> interest in the topic, both from researchers [1] and from the community,
> including a proposed session at LSFMM 2018 [2] (though I don't think it
> happened).
>
> I've been working on top of the Google iblock implementation to find
> something upstreamable and would like to present my design and gather
> feedback on some points, in particular zero-copy and the overall user
> space interface.
>
> The design I'm leaning towards uses special fds opened by the driver to
> transfer data to/from the block driver, preferably through direct
> splicing as much as possible, to keep the data in kernel space.  This
> is because, in my use case, the driver usually only manipulates
> metadata, while the data itself is forwarded directly through the
> network, or similar.  It would be neat if we could leverage the existing
> splice/copy_file_range syscalls so that we never need to bring disk
> data into user space if we can avoid it.  I've also experimented with
> regular pipes, but I found no way around keeping a large number of
> pipes open, one for each possible command 'slot'.
>
> [1] https://dl.acm.org/doi/10.1145/3456727.3463768
> [2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html
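
For illustration, the splice-based data path described above could look roughly like the sketch below from the user-space side. The per-command fd ("cmd_fd") and the helper function are invented for the example; splice(2) needs a pipe on one end, so the payload bounces through an anonymous pipe, but it never enters a user-space buffer.

/*
 * Rough sketch only: forward the payload of one command from a
 * hypothetical driver-provided fd ("cmd_fd") to a network socket
 * without copying it through user space.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

static int forward_cmd_payload(int cmd_fd, int sock_fd, size_t len)
{
	int pipefd[2];

	if (pipe(pipefd) < 0)
		return -errno;

	while (len > 0) {
		/* command fd -> pipe: data stays in kernel buffers */
		ssize_t in = splice(cmd_fd, NULL, pipefd[1], NULL, len,
				    SPLICE_F_MOVE | SPLICE_F_MORE);
		if (in <= 0)
			break;
		len -= in;

		/* pipe -> socket: drain what was just queued */
		while (in > 0) {
			ssize_t out = splice(pipefd[0], NULL, sock_fd, NULL,
					     in, SPLICE_F_MOVE | SPLICE_F_MORE);
			if (out <= 0)
				goto out;
			in -= out;
		}
	}
out:
	close(pipefd[0]);
	close(pipefd[1]);
	return len ? -EIO : 0;
}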

Actually, I'd rather have something like an 'inverse io_uring', where the application sets up a shared memory region split into separate rings for submission and completion. The kernel could then write/map the incoming data onto those rings, and the application can read it from there.
Maybe it'll be worthwhile to look at virtio here.
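
To make that a bit more concrete, a purely illustrative layout for such a shared region could look like the snippet below. All structure and field names are invented here; none of this is an existing interface.

#include <stdint.h>

struct ublkdev_cmd {			/* incoming block request, filled in by the kernel */
	uint8_t		op;		/* READ/WRITE/FLUSH/DISCARD/... */
	uint8_t		flags;
	uint16_t	tag;		/* slot number, echoed back in the completion */
	uint32_t	nr_sectors;
	uint64_t	start_sector;
	uint64_t	buf_offset;	/* offset of the data buffer inside the region */
};

struct ublkdev_cqe {			/* completion written by the application */
	uint16_t	tag;
	int32_t		result;		/* bytes transferred or negative errno */
};

struct ublkdev_ring {
	uint32_t	head;		/* consumer index */
	uint32_t	tail;		/* producer index */
	uint32_t	ring_mask;	/* ring_entries - 1 */
	uint32_t	ring_entries;
	/* the entries array follows at an offset advertised at setup time */
};

The kernel would act as the producer on the submission ring and as the consumer on the completion ring, i.e. the roles would be exactly reversed compared to io_uring.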

But in either case, using fds or pipes for commands doesn't really scale, as the number of fds is inherently limited. Using fds also restricts you to serial processing (you can only read sequentially from an fd); with mmap() you get greater flexibility and the option of parallel processing.
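
As a sketch of what that buys you: with an mmap()ed command region, several worker threads can claim slots concurrently by bumping a shared index, instead of being funnelled through a single read() stream. The device node, ring layout and sizes below are all made up for the example.

/*
 * Sketch of parallel consumption from an mmap()ed command ring.
 * The device node, the ring layout and the handler are invented;
 * only the idea matters: multiple threads can claim slots
 * concurrently, which a single read() stream cannot offer.
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define RING_ENTRIES	128		/* must match what the kernel advertised */
#define NR_WORKERS	4

struct ring {
	_Atomic uint32_t head;			/* bumped by user-space consumers */
	_Atomic uint32_t tail;			/* bumped by the kernel producer */
	uint64_t	 entries[RING_ENTRIES];	/* stand-in for real command descriptors */
};

static void handle_cmd(uint64_t cmd)
{
	/* decode the command and issue the backing I/O for it */
	(void)cmd;
}

static void *worker(void *arg)
{
	struct ring *r = arg;

	for (;;) {
		uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
		uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);

		if (head == tail)
			continue;	/* real code would block or poll instead of spinning */

		uint64_t cmd = r->entries[head & (RING_ENTRIES - 1)];

		/* try to claim the slot; another worker may win the race */
		if (!atomic_compare_exchange_weak(&r->head, &head, head + 1))
			continue;

		handle_cmd(cmd);
	}
	return NULL;
}

int main(void)
{
	int fd = open("/dev/ublkdev0", O_RDWR);		/* hypothetical device node */
	struct ring *r;
	pthread_t tid[NR_WORKERS];
	int i;

	if (fd < 0)
		return 1;
	r = mmap(NULL, sizeof(*r), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (r == MAP_FAILED)
		return 1;

	for (i = 0; i < NR_WORKERS; i++)
		pthread_create(&tid[i], NULL, worker, r);
	for (i = 0; i < NR_WORKERS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}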

Cheers,

Hannes
--
Dr. Hannes Reinecke                Kernel Storage Architect
hare@xxxxxxx                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


