Re: [LSF/MM/BPF TOPIC] block drivers in user space

Sagi Grimberg <sagi@xxxxxxxxxxx> writes:

>> Actually, I'd rather have something like an 'inverse io_uring', where
>> an application creates a memory region separated into several 'rings'
>> for submission and completion.
>> Then the kernel could write/map the incoming data onto the rings, and
>> the application can read from there.
>> Maybe it'll be worthwhile to look at virtio here.
>
> There is lio loopback backed by tcmu... I'm assuming that nvmet can
> hook into the same/similar interface. nvmet is pretty lean, and we
> can probably help tcmu/equivalent scale better if that is a concern...

Sagi,

I looked at tcmu prior to starting this work.  Other than the tcmu
overhead, one concern was the complexity of exposing a SCSI device
interface versus sending plain block requests to userspace.
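
To make that contrast concrete, here is a rough sketch of the kind of
minimal descriptor I have in mind for the 'inverse io_uring' style ring
quoted above.  All of the names (ubdev_*) are invented for illustration;
this is not an existing ABI, just an indication of how little a
userspace server would need to understand compared to a SCSI command
set:

	#include <errno.h>
	#include <stdint.h>

	/*
	 * Hypothetical kernel->userspace ring entries for a userspace
	 * block driver.  Field and type names are made up for this
	 * sketch; the point is that the server only sees
	 * READ/WRITE/FLUSH/DISCARD plus an offset and length, not SCSI
	 * CDBs.
	 */
	enum ubdev_op {
		UBDEV_OP_READ		= 0,
		UBDEV_OP_WRITE		= 1,
		UBDEV_OP_FLUSH		= 2,
		UBDEV_OP_DISCARD	= 3,
	};

	struct ubdev_sqe {	/* submission entry, written by the kernel */
		uint64_t tag;		/* echoed back in the completion */
		uint32_t op;		/* enum ubdev_op */
		uint32_t flags;
		uint64_t sector;	/* start LBA, in 512-byte units */
		uint32_t nr_sectors;	/* request length */
		uint32_t data_off;	/* offset into a shared data area, if mapped */
	};

	struct ubdev_cqe {	/* completion entry, written by userspace */
		uint64_t tag;
		int32_t  result;	/* 0 on success or negative errno */
		uint32_t reserved;
	};

	/* Userspace side: handle one submission, fill in its completion. */
	static void ubdev_serve_one(const struct ubdev_sqe *sqe,
				    struct ubdev_cqe *cqe)
	{
		cqe->tag = sqe->tag;

		switch (sqe->op) {
		case UBDEV_OP_READ:
		case UBDEV_OP_WRITE:
			/* backend I/O on sqe->sector / sqe->nr_sectors here */
			cqe->result = 0;
			break;
		case UBDEV_OP_FLUSH:
			cqe->result = 0;
			break;
		default:
			cqe->result = -EOPNOTSUPP;
		}
	}

Everything here fits in a fixed-size entry, so the rings could live in a
single mmap()ed region shared with the kernel, as in the quoted
proposal.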

What would be the advantage of doing it as an NVMe target over
delivering requests directly to userspace as a block driver?

Also, for the case where userspace only wants to look at the IO
descriptor, without the data actually being copied to userspace, I'm
not sure that would be doable with tcmu.

Here is another attempt to do the same thing, this time with device-mapper:

https://patchwork.kernel.org/project/dm-devel/patch/20201203215859.2719888-4-palmer@xxxxxxxxxxx/

-- 
Gabriel Krisman Bertazi


