Re: [LSF/MM TOPIC] NVMe Performance: Userspace vs Kernel

Hi Keith,

> On Feb 15, 2019, at 9:47 PM, Keith Busch <keith.busch@xxxxxxxxx> wrote:
> 
> On Fri, Feb 15, 2019 at 09:19:02PM +0000, Felipe Franciosi wrote:
>> Over the last year or two, I have done extensive experimentation comparing applications using libaio to those using SPDK.
> 
> Try the io_uring interface instead. It's queued up in the linux-block
> for-next tree.

I just read about it in Bart's other response. Thanks for pointing it out.
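
For anyone else wanting to try it, here is a minimal sketch of a single readv submitted through io_uring using the liburing helpers. The file name is just a placeholder, error handling is trimmed, and it assumes liburing built against headers from that tree:

    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <sys/uio.h>

    int main(void)
    {
            struct io_uring ring;
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            struct iovec iov;
            char buf[4096];
            int fd, ret;

            /* "testfile" is a placeholder; use any readable file. */
            fd = open("testfile", O_RDONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Set up a submission/completion ring with 8 entries. */
            ret = io_uring_queue_init(8, &ring, 0);
            if (ret < 0) {
                    fprintf(stderr, "queue_init failed: %d\n", ret);
                    return 1;
            }

            /* Queue one 4KiB readv at offset 0, then submit. */
            iov.iov_base = buf;
            iov.iov_len = sizeof(buf);
            sqe = io_uring_get_sqe(&ring);
            io_uring_prep_readv(sqe, fd, &iov, 1, 0);
            io_uring_submit(&ring);

            /* Block until the completion shows up on the CQ ring. */
            ret = io_uring_wait_cqe(&ring, &cqe);
            if (ret < 0) {
                    fprintf(stderr, "wait_cqe failed: %d\n", ret);
                    return 1;
            }
            printf("readv returned %d\n", cqe->res);
            io_uring_cqe_seen(&ring, cqe);

            io_uring_queue_exit(&ring);
            return 0;
    }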

> 
>> For hypervisors, where storage devices can be exclusively accessed with userspace drivers (given the device can be dedicated to a single process), using SPDK has proven to be significantly faster and more efficient.
> 
> It doesn't work so well for file based or multi-device backing
> storage. But if you are sequestering an entire controller over to a VM,
> direct-assign/device-passthrough is usually also an option and that
> ought to be even faster.

The advantage is dedicating a controller to "the hypervisor" (i.e., one userspace process responsible for mediating access between multiple VMs). Some VMs may choose to use userspace drivers, too; others can use traditional kernel datapaths. The average overhead we have measured from virtual machines in this setup is negligible.

I did not run into problems with multiple devices, although the data format certainly requires careful thought.

Cheers,
Felipe


