Re: [LSF/MM/BPF BoF]: extend UBLK to cover real storage hardware

This is a great topic, so I'd like to be part of it as well.

It would be great to figure out what latency overhead we can expect
from ublk in the future, clarifying which use cases ublk can cater to.
That will help a lot in deciding what to implement in the kernel vs.
in user space.

Cheers,
Hans

On Mon, Feb 6, 2023 at 6:54 PM Hannes Reinecke <hare@xxxxxxx> wrote:
>
> On 2/6/23 16:00, Ming Lei wrote:
> > Hello,
> >
> > So far UBLK is only used for implementing virtual block devices from
> > userspace, such as loop, nbd, qcow2, ...[1].
> >
> > It could be useful for UBLK to cover real storage hardware too:
> >
> > - for fast prototyping or performance evaluation
> >
> > - some network storage is attached to the host, such as iscsi and nvme-tcp;
> > the current UBLK interface doesn't support such devices, since they need
> > all LUNs/Namespaces to share host resources (such as the tag set)
> >
> > - SPDK already supports userspace drivers for real hardware
> >
> > So I propose to extend UBLK to support real hardware devices:
> >
> > 1) extend the UBLK ABI to support disks attached to the host, such
> > as SCSI LUNs/NVMe Namespaces
> >
> > 2) the following items involve operating hardware from userspace, so
> > the userspace driver has to be trusted: root is required, and
> > unprivileged UBLK devices can't be supported
> >
> > 3) how to operate the hardware's memory space
> > - unbind the kernel driver and rebind the device to uio/vfio
> > - map the PCI BAR into userspace[2], then userspace can drive the hardware
> > via MMIO on the mapped user address (rough sketch below)
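> >
> > As a rough illustration of 3) (an untested sketch, not a worked-out
> > interface: the IOMMU group number, the device BDF and the register
> > offset are all placeholders), mapping BAR0 of a vfio-pci bound device
> > and doing an MMIO read could look like:
> >
> > /*
> >  * Untested sketch: map BAR0 of a vfio-pci bound device and read a
> >  * register.  Group "42" and BDF "0000:01:00.0" are placeholders;
> >  * error handling is omitted for brevity.
> >  */
> > #include <fcntl.h>
> > #include <stdint.h>
> > #include <stdio.h>
> > #include <sys/ioctl.h>
> > #include <sys/mman.h>
> > #include <linux/vfio.h>
> >
> > int main(void)
> > {
> >         int container = open("/dev/vfio/vfio", O_RDWR);
> >         int group = open("/dev/vfio/42", O_RDWR);
> >         struct vfio_group_status gstat = { .argsz = sizeof(gstat) };
> >
> >         ioctl(group, VFIO_GROUP_GET_STATUS, &gstat);
> >         ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
> >         ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);
> >
> >         int dev = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");
> >         struct vfio_region_info bar0 = {
> >                 .argsz = sizeof(bar0),
> >                 .index = VFIO_PCI_BAR0_REGION_INDEX,
> >         };
> >         ioctl(dev, VFIO_DEVICE_GET_REGION_INFO, &bar0);
> >
> >         /* plain loads/stores on the mapping are MMIO accesses */
> >         volatile uint32_t *regs = mmap(NULL, bar0.size, PROT_READ | PROT_WRITE,
> >                                        MAP_SHARED, dev, bar0.offset);
> >         printf("reg[0] = 0x%x\n", regs[0]);
> >         return 0;
> > }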
> >
> > 4) DMA
> > - DMA requires physical memory addresses. The UBLK driver already has
> > the block request pages, so can we export the request SG list (each
> > segment's physical address, offset and length) to userspace? If the
> > max_segments limit is not too big (<= 64), the buffer needed for
> > holding the SG list can stay small.
> >
> > - a small amount of memory for use as DMA descriptors can be
> > pre-allocated from userspace; the kernel pins the pages and returns
> > the physical addresses to userspace for programming DMA (see the
> > sketch after this list)
> >
> > - this way is still zero copy
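> >
> > For the pre-allocated descriptor memory in 4), vfio type1 already has
> > a rough analogue today: VFIO_IOMMU_MAP_DMA pins the user pages and
> > maps them at a caller-chosen IOVA the device can DMA to. Untested
> > sketch below (IOVA and size are arbitrary, and the container fd is set
> > up as in the earlier sketch); exporting the SG list of ublk request
> > pages would still need a new UBLK interface:
> >
> > #include <stdint.h>
> > #include <sys/ioctl.h>
> > #include <sys/mman.h>
> > #include <linux/vfio.h>
> >
> > #define DESC_IOVA   0x100000UL      /* arbitrary device-visible address */
> > #define DESC_BYTES  (64 * 1024)
> >
> > void *map_desc_ring(int container)
> > {
> >         void *buf = mmap(NULL, DESC_BYTES, PROT_READ | PROT_WRITE,
> >                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> >         struct vfio_iommu_type1_dma_map map = {
> >                 .argsz = sizeof(map),
> >                 .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
> >                 .vaddr = (uintptr_t)buf,
> >                 .iova  = DESC_IOVA,
> >                 .size  = DESC_BYTES,
> >         };
> >
> >         /* pins the pages and programs the IOMMU; the device can now DMA
> >          * to/from DESC_IOVA while userspace fills descriptors via buf */
> >         ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
> >         return buf;
> > }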
> >
> > 5) notification from hardware: interrupt or polling
> > - SPDK uses userspace polling; that works, but it eats CPU, so it
> > should be only one of the options
> >
> > - io_uring command has proven to be very efficient; if io_uring command
> > is applied to uio/vfio for delivering interrupts (in a similar way to
> > how UBLK forwards blk io commands from kernel to userspace), that should
> > be efficient too, given that batching is done after the io_uring
> > command completes (see the sketch after this list)
> >
> > - or it could be made flexible with hybrid interrupt & polling, since a
> > single pthread/queue userspace implementation can retrieve all kinds of
> > inflight IO info very cheaply, and maybe some ML model could even be
> > applied to learn & predict when an IO will complete
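> >
> > To make the io_uring direction in 5) a bit more concrete: the closest
> > thing that exists today is routing an MSI-X vector to an eventfd with
> > VFIO_DEVICE_SET_IRQS and completing a read on that eventfd through
> > io_uring, so the interrupt shows up as a CQE next to the ublk command
> > completions. Untested sketch below (vector 0 only, ring assumed to be
> > set up already); the proposed io_uring command path into uio/vfio
> > itself does not exist yet:
> >
> > #include <liburing.h>
> > #include <stdint.h>
> > #include <string.h>
> > #include <sys/eventfd.h>
> > #include <sys/ioctl.h>
> > #include <linux/vfio.h>
> >
> > static uint64_t irq_count;
> >
> > int wait_irq_via_uring(int vfio_dev_fd, struct io_uring *ring)
> > {
> >         int32_t efd = eventfd(0, 0);
> >
> >         /* have vfio signal efd when MSI-X vector 0 fires */
> >         char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
> >         struct vfio_irq_set *irq = (struct vfio_irq_set *)buf;
> >
> >         irq->argsz = sizeof(buf);
> >         irq->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
> >         irq->index = VFIO_PCI_MSIX_IRQ_INDEX;
> >         irq->start = 0;
> >         irq->count = 1;
> >         memcpy(irq->data, &efd, sizeof(efd));
> >         ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq);
> >
> >         /* arm a read on the eventfd; it completes when the IRQ fires */
> >         struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
> >         io_uring_prep_read(sqe, efd, &irq_count, sizeof(irq_count), 0);
> >         io_uring_submit(ring);
> >
> >         struct io_uring_cqe *cqe;
> >         io_uring_wait_cqe(ring, &cqe);
> >         io_uring_cqe_seen(ring, cqe);
> >         return 0;
> > }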
> >
> > 6) others?
> >
> >
> Good idea.
> I'd love to have this discussion.
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                Kernel Storage Architect
> hare@xxxxxxx                              +49 911 74053 688
> SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
> HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
> Myers, Andrew McDonald, Martje Boudien Moerman
>



