On Fri, Apr 08, 2022 at 02:52:35PM +0800, Xiaoguang Wang wrote:
> hi,
>
> > On Tue, Feb 22, 2022 at 07:57:27AM +0100, Hannes Reinecke wrote:
> >> On 2/21/22 20:59, Gabriel Krisman Bertazi wrote:
> >>> I'd like to discuss an interface to implement user space block
> >>> devices, while avoiding local network NBD solutions. There has been
> >>> reiterated interest in the topic, both from researchers [1] and from
> >>> the community, including a proposed session in LSFMM2018 [2] (though
> >>> I don't think it happened).
> >>>
> >>> I've been working on top of the Google iblock implementation to
> >>> find something upstreamable and would like to present my design and
> >>> gather feedback on some points, in particular zero-copy and the
> >>> overall user space interface.
> >>>
> >>> The design I'm tending towards uses special fds opened by the
> >>> driver to transfer data to/from the block driver, preferably
> >>> through direct splicing as much as possible, to keep data only in
> >>> kernel space. This is because, in my use case, the driver usually
> >>> only manipulates metadata, while data is forwarded directly through
> >>> the network, or similar. It would be neat if we could leverage the
> >>> existing splice/copy_file_range syscalls so that we never need to
> >>> bring disk data to user space, if we can avoid it. I've also
> >>> experimented with regular pipes, but I found no way around keeping
> >>> a lot of pipes open, one for each possible command 'slot'.
> >>>
> >>> [1] https://dl.acm.org/doi/10.1145/3456727.3463768
> >>> [2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html
> >>>
> >> Actually, I'd rather have something like an 'inverse io_uring',
> >> where an application creates a memory region separated into several
> >> 'rings' for submission and completion.
> >> Then the kernel could write/map the incoming data onto the rings,
> >> and the application can read from there.
> >> Maybe it'll be worthwhile to look at virtio here.
> > IMO it needn't be an 'inverse io_uring'; the normal io_uring SQE/CQE
> > model covers this case. The userspace part can submit SQEs beforehand
> > to get a notification for each incoming io request from the kernel
> > driver; then, once an io request is queued to the driver, the driver
> > posts a CQE for the previously submitted SQE. The recently posted
> > IORING_OP_URING_CMD patch [1] is perfect for this purpose.
> >
> > I have written one such userspace block driver recently: [2] is the
> > kernel-side blk-mq driver (the ubd driver), and the userspace part
> > is ubdsrv [3]. Both parts look quite simple, but they are still at a
> > very early stage; so far only the ubd-loop and ubd-null targets are
> > implemented in [3]. Not only is the io command communication channel
> > done via IORING_OP_URING_CMD, but the IO handling for ubd-loop is
> > also implemented via plain io_uring.
> >
> > It is basically working. For ubd-loop, I see no regression in
> > 'xfstests -g auto' on the ubd block device compared with the same
> > xfstests on the underlying disk, and my simple performance test in a
> > VM shows results no worse than the kernel loop driver with dio, and
> > even much better in some test situations.
> I have also spent time studying your code; its idea is really good,
> thanks for this great work. Though we're using tcmu, we really just
> need a simple block device based on block semantics; tcmu is based on
> the SCSI protocol, which is somewhat complicated and hurts the
> performance of small io requests.
> So if you like, we're willing to participate in this project, and we
> may use it in our internal business. Thanks.
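A minimal sketch of the fetch model quoted above (SQEs armed up front,
one per command slot; the driver posts a CQE whenever an io request
arrives), assuming the then unmerged IORING_OP_URING_CMD interface plus
liburing. The /dev/ubdc0 path, the UBD_IO_* command codes and
handle_io() are illustrative placeholders, not the driver's actual kabi:

#include <fcntl.h>
#include <string.h>
#include <liburing.h>

#define QD	64	/* one slot (tag) per in-flight io request */

/* placeholder command codes; the real ones come from the driver uapi */
#define UBD_IO_FETCH_REQ		0x20
#define UBD_IO_COMMIT_AND_FETCH_REQ	0x21

static void handle_io(__u64 tag)
{
	/* serve the request for this slot (loop/null/net target) */
}

static void queue_cmd(struct io_uring *ring, int fd, __u64 tag, __u32 op)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	memset(sqe, 0, sizeof(*sqe));	/* base sqe; cmd payload follows it */
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = fd;			/* per-device char device */
	sqe->cmd_op = op;		/* which ubd io command to run */
	sqe->user_data = tag;		/* route the CQE back to its slot */
}

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	int fd = open("/dev/ubdc0", O_RDWR);
	__u64 tag;

	/* uring_cmd payloads need the 128-byte SQE variant */
	io_uring_queue_init(QD, &ring, IORING_SETUP_SQE128);

	for (tag = 0; tag < QD; tag++)	/* arm every slot beforehand */
		queue_cmd(&ring, fd, tag, UBD_IO_FETCH_REQ);
	io_uring_submit(&ring);

	for (;;) {
		io_uring_wait_cqe(&ring, &cqe);
		tag = cqe->user_data;	/* an io request arrived on 'tag' */
		io_uring_cqe_seen(&ring, cqe);
		handle_io(tag);
		/* report the result and re-arm the slot in one command */
		queue_cmd(&ring, fd, tag, UBD_IO_COMMIT_AND_FETCH_REQ);
		io_uring_submit(&ring);
	}
}

Keeping one armed SQE per tag means the driver always has somewhere to
post a completion, so no extra notification channel is needed.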
That is great, and welcome to participate! I'm glad to see a real
potential user of a userspace block device. I believe there are lots of
things to do in this area, but so far:

1) consolidate the interface between the ubd driver and ubdsrv, since
   this part is kabi;

2) consolidate the design of ubdsrv (the userspace part), so that we
   can support different backings or targets easily; one idea is to
   handle all io requests via io_uring;

3) consolidate the design of ubdsrv to provide a stable interface for
   supporting higher-level languages (python, rust, ...); inevitably,
   one new, more complicated target/backing should be developed in the
   meantime, such as qcow2 or another real/popular device.

I plan to post the formal driver patches after the io_uring command
interface patchset is merged, but maybe we can do it sooner for early
review.

The driver side should be kept as simple and as efficient as possible.
It just focuses on forwarding io requests to userspace and handling
data copy or zero copy, and the ubd driver won't store any state of
the backing/target. Also, actual performance is really sensitive to
batched handling: recently I used task_work_add() to improve batching,
and the performance boost is easy to observe (a rough sketch of the
idea is appended below). Another related part is how to implement zero
copy, a problem that exists in tcmu and other projects too.

> Another little question: why do you use the raw io_uring interface
> rather than liburing? Are there any special reasons?

It is just to make ubdsrv easy to build without any dependency; it
definitely will switch to liburing. The change should be quite simple,
since the related glue code is kept in one source file and the current
interface is already similar to liburing's.

Thanks,
Ming
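A kernel-side sketch of the task_work_add() batching mentioned above:
->queue_rq() does not notify the ubdsrv daemon directly, but queues a
task work on the daemon task, so back-to-back requests are forwarded in
a single run when the daemon next returns to userspace. struct ubd_io,
ubd_daemon_task() and ubd_forward_to_daemon() are illustrative
stand-ins for driver internals, not real ubd symbols:

#include <linux/blk-mq.h>
#include <linux/task_work.h>

struct ubd_io {
	struct request		*req;
	struct callback_head	work;	/* per-request task work */
};

/* illustrative driver internals */
struct task_struct *ubd_daemon_task(struct request_queue *q);
void ubd_forward_to_daemon(struct ubd_io *io);

static void ubd_io_task_work_fn(struct callback_head *work)
{
	struct ubd_io *io = container_of(work, struct ubd_io, work);

	/*
	 * Runs in the daemon task's context on its way back to user
	 * space; every task work queued since the daemon last returned
	 * runs here in one batch. Fill the shared io descriptor and
	 * complete the pending fetch command so ubdsrv sees the request.
	 */
	ubd_forward_to_daemon(io);
}

static blk_status_t ubd_queue_rq(struct blk_mq_hw_ctx *hctx,
				 const struct blk_mq_queue_data *bd)
{
	struct ubd_io *io = blk_mq_rq_to_pdu(bd->rq);

	blk_mq_start_request(bd->rq);
	io->req = bd->rq;

	init_task_work(&io->work, ubd_io_task_work_fn);
	/* TWA_SIGNAL kicks the daemon promptly if it is in userspace */
	if (task_work_add(ubd_daemon_task(hctx->queue), &io->work,
			  TWA_SIGNAL))
		return BLK_STS_IOERR;	/* daemon is exiting */
	return BLK_STS_OK;
}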