[LSF/MM/BPF BoF]: extend UBLK to cover real storage hardware

Hello,

So far UBLK has only been used to implement virtual block devices in
userspace, such as loop, nbd, and qcow2 [1].

It could be useful for UBLK to cover real storage hardware too:

- for fast prototyping or performance evaluation

- some network storage is attached to the host, such as iSCSI and
nvme-tcp; the current UBLK interface doesn't support such devices,
since they need all LUNs/namespaces to share host resources (such as
tag sets)

- SPDK already supports userspace drivers for real hardware

So I propose extending UBLK to support real hardware devices:

1) extend the UBLK ABI to support disks attached to a host, such
as SCSI LUNs/NVMe namespaces

2) the following points involve operating hardware from userspace,
so the userspace driver has to be trusted: root is required, and
unprivileged UBLK devices can't be supported

3) how to access hardware memory space
- unbind the kernel driver and rebind with uio/vfio
- map PCI BARs into userspace [2]; userspace can then drive the
hardware via MMIO on the mapped addresses
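
A minimal sketch of the mmap route (assuming the device is already
unbound from its kernel driver, BAR0 is an MMIO register region, and
the PCI address below is only a placeholder):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* sysfs exposes each BAR as resource<N>; mmap() it for MMIO */
	int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0",
		      O_RDWR | O_SYNC);
	volatile uint32_t *bar;

	if (fd < 0)
		return 1;

	bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (bar == MAP_FAILED)
		return 1;

	/* read a device-specific 32-bit register at offset 0 */
	printf("reg0: 0x%x\n", bar[0]);

	munmap((void *)bar, 4096);
	close(fd);
	return 0;
}

The vfio route is similar, except the BAR offset comes from
VFIO_DEVICE_GET_REGION_INFO and the mmap() is done on the vfio device
fd, which also gives IOMMU protection.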

4) DMA
- DMA requires physical memory addresses. The UBLK driver already has
the block request pages, so can we export the request SG list (each
segment's physical address, offset and length) to userspace? If the
max_segments limit is not too big (<= 64), the buffer needed to hold
the SG list can be kept small.

- a small amount of physical memory to use for DMA descriptors can be
pre-allocated from userspace; the kernel is asked to pin the pages,
then return their physical addresses to userspace for programming DMA

- this approach is still zero-copy
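
To make the SG list idea concrete, here is a purely hypothetical
sketch (none of these names exist in the UBLK ABI today) of what the
per-request export could look like:

#include <linux/types.h>

#define UBLK_MAX_SEGMENTS	64	/* assumed queue limit */

/* one physically-contiguous segment of a block request */
struct ublk_sg_entry {
	__u64	phys_addr;	/* physical address of the segment */
	__u32	offset;		/* offset into the first page */
	__u32	len;		/* segment length in bytes */
};

/* per-request SG list, shared read-only with userspace */
struct ublk_sg_list {
	__u16			nr_segs;
	__u16			pad[3];
	struct ublk_sg_entry	segs[UBLK_MAX_SEGMENTS];
};

At 16 bytes per entry, 64 segments plus a small header come to roughly
1KB per in-flight request, so even a deep queue only needs a few
hundred KB of shared buffer.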

5) notification from hardware: interrupt or polling
- SPDK uses userspace polling; this is doable, but it burns CPU, so
it should be just one of the options

- io_uring command has proven very efficient; if io_uring command is
applied to uio/vfio for delivering interrupts (similar to the way UBLK
forwards block IO commands from kernel to userspace), that should be
efficient too, given that batch processing can be done after the
io_uring command completes

- or it could be made flexible with hybrid interrupt & polling, given
that a single-pthread-per-queue userspace implementation can retrieve
all kinds of inflight IO info very cheaply; it may even be possible
to apply an ML model to learn and predict when an IO will complete
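
As a sketch of the hybrid approach, assuming the device interrupt has
already been routed to an eventfd via VFIO (VFIO_DEVICE_SET_IRQS), and
with poll_hw_cq() standing in for the device-specific completion
handling:

#include <liburing.h>
#include <stdint.h>

/* device-specific: reap completions, return how many were found */
extern int poll_hw_cq(void);

static void queue_loop(struct io_uring *ring, int irq_eventfd)
{
	uint64_t cnt;

	for (;;) {
		struct io_uring_sqe *sqe;
		struct io_uring_cqe *cqe;

		/* arm the interrupt notification as an async eventfd read */
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_read(sqe, irq_eventfd, &cnt, sizeof(cnt), 0);
		io_uring_submit(ring);

		/* stay in polling mode while completions keep arriving */
		while (poll_hw_cq() > 0)
			;

		/* idle: sleep until the interrupt eventfd fires */
		io_uring_wait_cqe(ring, &cqe);
		io_uring_cqe_seen(ring, cqe);
	}
}

The same loop leaves room for smarter policies, e.g. only arming the
interrupt when the predicted time to the next completion is long.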

6) others?



[1] https://github.com/ming1/ubdsrv
[2] https://spdk.io/doc/userspace.html
 

Thanks, 
Ming



