Re: [LSF/MM/BPF TOPIC] block drivers in user space

Hi Gabriel,
There is a project implementing a userspace block device: https://github.com/ubbd/ubbd

ubbd stands for Userspace Backend Block Device (ubd was already taken by the UML block device).

It has a kernel module (ubbd.ko), a userspace daemon (ubbdd), and an admin tool (ubbdadm).
(1) ubbd.ko depends on uio and provides a cmd ring and a complete ring.
(2) ubbdd implements the different backends; currently the file and rbd backends are done.
ubbdd is also designed for online restart, meaning you can restart ubbdd with IO inflight.
(3) ubbdadm is an admin tool providing map, unmap, and config operations.
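Conceptually, the cmd ring and complete ring behave like single-producer/single-consumer rings in shared memory: the kernel side produces commands, the daemon consumes them, and completions flow back the other way. A minimal sketch of that idea follows; the struct layout, names, and sizes here are hypothetical and are not ubbd's actual ABI:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical command entry; ubbd's real on-ring layout differs. */
struct ring_cmd {
    uint64_t tag;      /* completion is matched back to this tag */
    uint64_t offset;   /* byte offset on the virtual device */
    uint32_t len;      /* request length in bytes */
    uint32_t op;       /* 0 = read, 1 = write */
};

#define RING_SIZE 64   /* must be a power of two */

/* Single-producer/single-consumer ring in a shared memory region. */
struct cmd_ring {
    uint32_t head;     /* written only by the producer */
    uint32_t tail;     /* written only by the consumer */
    struct ring_cmd slots[RING_SIZE];
};

static int ring_push(struct cmd_ring *r, const struct ring_cmd *c)
{
    if (r->head - r->tail == RING_SIZE)
        return -1;                          /* ring full */
    r->slots[r->head & (RING_SIZE - 1)] = *c;
    r->head++;  /* real shared-memory code needs a memory barrier here */
    return 0;
}

static int ring_pop(struct cmd_ring *r, struct ring_cmd *out)
{
    if (r->head == r->tail)
        return -1;                          /* ring empty */
    *out = r->slots[r->tail & (RING_SIZE - 1)];
    r->tail++;
    return 0;
}
```

In the real driver the region would be mapped into the daemon via uio's mmap interface and accessed with proper ordering, but the push/pop discipline is the same.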

This project is still in the development stage, but it has already been tested with blktests and xfstests.

Also, there is some performance testing of the rbd backend, comparing it against librbd and krbd.

You can find more information on GitHub if you are interested.

Thanx

On Tue, 2022/2/22 3:59 AM, Gabriel Krisman Bertazi wrote:
I'd like to discuss an interface to implement user space block devices,
while avoiding local network NBD solutions.  There has been reiterated
interest in the topic, both from researchers [1] and from the community,
including a proposed session in LSFMM2018 [2] (though I don't think it
happened).

I've been working on top of the Google iblock implementation to find
something upstreamable and would like to present my design and gather
feedback on some points, in particular zero-copy and overall user space
interface.

The design I'm leaning towards uses special fds opened by the driver to
transfer data to/from the block driver, preferably through direct
splicing as much as possible, to keep data only in kernel space.  This
is because, in my use case, the driver usually only manipulates
metadata, while data is forwarded directly through the network, or
similar. It would be neat if we can leverage the existing
splice/copy_file_range syscalls such that we don't ever need to bring
disk data to user space, if we can avoid it.  I've also experimented
with regular pipes, but I found no way around keeping a lot of pipes
open, one for each possible command 'slot'.
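The splice path described above boils down to a single splice(2) call per transfer: one end must be a pipe, and the data moves between the two fds without ever entering a userspace buffer. A minimal sketch of that call, draining a pipe into an output fd (the helper name `pipe_to_fd` is mine; in the proposed design the pipe end would be the driver's special fd):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Move len bytes from the read end of a pipe into out_fd without
 * copying them through userspace. Returns bytes moved, or -1 on error. */
static ssize_t pipe_to_fd(int pipe_rd, int out_fd, size_t len)
{
    ssize_t total = 0;
    while ((size_t)total < len) {
        ssize_t n = splice(pipe_rd, NULL, out_fd, NULL,
                           len - total, SPLICE_F_MOVE);
        if (n < 0)
            return -1;      /* splice failed */
        if (n == 0)
            break;          /* pipe drained early */
        total += n;
    }
    return total;
}
```

The per-command pipe problem mentioned above comes from exactly this constraint: splice needs a pipe on one side, so each in-flight command slot needs its own pipe pair unless the driver fd itself can act as the splice endpoint.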

[1] https://dl.acm.org/doi/10.1145/3456727.3463768
[2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html



