Re: [LSF/MM TOPIC] NVMe Performance: Userspace vs Kernel

On Sat, 2019-02-16 at 00:53 +0000, Felipe Franciosi wrote:
> On Feb 15, 2019, at 9:41 PM, Bart Van Assche <bvanassche@acm.org> wrote:
> > On Fri, 2019-02-15 at 21:19 +0000, Felipe Franciosi wrote:
> > > Hi All,
> > > 
> > > I'd like to attend LSF/MM this year and discuss kernel performance when accessing NVMe devices, specifically (but not limited to) Intel Optane Memory (which boasts very low latency and high iops/throughput per NVMe controller).
> > > 
> > > Over the last year or two, I have done extensive experimentation comparing applications using libaio to those using SPDK. For hypervisors, where storage devices can be accessed exclusively through userspace drivers (given the device can be dedicated to a single process), using SPDK has proven to be significantly faster and more efficient. That remains true even on the latest versions of the kernel.
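> > > 
> > > To make the kernel-side baseline concrete, the libaio path in these comparisons boils down to the classic submit/reap loop below. This is a minimal sketch rather than the actual benchmark code; the device path, block size and queue depth are placeholders (build with -laio):
> > > 
> > > #include <fcntl.h>
> > > #include <libaio.h>
> > > #include <stdlib.h>
> > > 
> > > int main(void)
> > > {
> > >     io_context_t ctx = 0;
> > >     if (io_setup(64, &ctx) < 0)            /* AIO context, QD up to 64 */
> > >         return 1;
> > > 
> > >     int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
> > >     if (fd < 0)
> > >         return 1;
> > > 
> > >     void *buf;
> > >     if (posix_memalign(&buf, 4096, 4096))  /* O_DIRECT needs alignment */
> > >         return 1;
> > > 
> > >     struct iocb cb, *cbs[1] = { &cb };
> > >     io_prep_pread(&cb, fd, buf, 4096, 0);  /* one 4KiB read at offset 0 */
> > >     io_submit(ctx, 1, cbs);                /* one syscall to submit */
> > > 
> > >     struct io_event ev;
> > >     io_getevents(ctx, 1, 1, &ev, NULL);    /* one syscall to reap */
> > > 
> > >     io_destroy(ctx);
> > >     return ev.res == 4096 ? 0 : 1;
> > > }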
+AD4 +AD4 +AD4 
> > > I have presented work focusing on hypervisors at several conferences during this time. Although I appreciate that LSF/MM is more discussion-oriented, I am linking a couple of these presentations for reference:
> > > 
> > > Flash Memory Summit 2018
> > > https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2018/20180808_SOFT-202-1_Franciosi.pdf
> > > 
> > > Linux Piter 2018
> > > https://linuxpiter.com/system/attachments/files/000/001/558/original/20181103_-_AHV_and_SPDK.pdf
> > > 
> > > For LSF/MM, instead of focusing on hypervisors, I would like to discuss what can be done to achieve better efficiency and performance when using the kernel. My data include detailed results covering various scenarios, such as different NUMA configurations, IRQ affinities, and polling modes.
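> > > 
> > > As one concrete example of what I mean by polling modes: kernel-side completion polling (pre-io_uring) is driven by preadv2() with RWF_HIPRI on an O_DIRECT descriptor. A minimal sketch, not my exact harness; the device path is a placeholder:
> > > 
> > > #define _GNU_SOURCE
> > > #include <fcntl.h>
> > > #include <stdlib.h>
> > > #include <sys/uio.h>
> > > 
> > > int main(void)
> > > {
> > >     /* Requires /sys/block/nvme0n1/queue/io_poll = 1 so the kernel
> > >        spins on the NVMe completion queue instead of taking an IRQ. */
> > >     int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
> > >     if (fd < 0)
> > >         return 1;
> > > 
> > >     void *buf;
> > >     if (posix_memalign(&buf, 4096, 4096))
> > >         return 1;
> > >     struct iovec iov = { .iov_base = buf, .iov_len = 4096 };
> > > 
> > >     /* RWF_HIPRI makes the syscall busy-poll for the completion. */
> > >     return preadv2(fd, &iov, 1, 0, RWF_HIPRI) == 4096 ? 0 : 1;
> > > }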
> > 
> > Hi Felipe,
> > 
> > It seems like you missed the performance comparison between SPDK and io_uring
> > that Jens posted recently?
> 
> I configured 5.0-rc6 and had a look at the io_uring code. I finally worked out how to use fio's t/io_uring to submit IO and poll for completions without system calls. My _initial_ numbers still show SPDK being faster and more efficient.
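> 
> For reference, the no-syscall path is IORING_SETUP_SQPOLL (a kernel thread polls the SQ for submissions) combined with IORING_SETUP_IOPOLL (completions are polled rather than interrupt-driven). t/io_uring drives the rings with raw syscalls; the sketch below uses liburing's helpers instead, with the device path and sizes as placeholders:
> 
> #include <fcntl.h>
> #include <liburing.h>
> #include <stdlib.h>
> #include <string.h>
> 
> int main(void)
> {
>     struct io_uring ring;
>     struct io_uring_params p;
>     memset(&p, 0, sizeof(p));
>     p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_IOPOLL;
>     p.sq_thread_idle = 2000;                 /* ms before SQ thread sleeps */
>     if (io_uring_queue_init_params(64, &ring, &p) < 0)
>         return 1;
> 
>     int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
>     if (fd < 0)
>         return 1;
>     io_uring_register_files(&ring, &fd, 1);  /* SQPOLL needs fixed files */
> 
>     void *buf;
>     if (posix_memalign(&buf, 4096, 4096))    /* IOPOLL implies O_DIRECT */
>         return 1;
>     struct iovec iov = { .iov_base = buf, .iov_len = 4096 };
> 
>     struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
>     io_uring_prep_readv(sqe, 0, &iov, 1, 0); /* 0 = fixed-file index */
>     sqe->flags |= IOSQE_FIXED_FILE;
>     io_uring_submit(&ring);  /* no syscall while the SQ thread is awake */
> 
>     /* With SQPOLL+IOPOLL the SQ thread also reaps completions, so we
>        just spin on the CQ ring in userspace; no io_uring_enter(). */
>     struct io_uring_cqe *cqe;
>     while (io_uring_peek_cqe(&ring, &cqe) != 0)
>         ;
>     int res = cqe->res;
>     io_uring_cqe_seen(&ring, cqe);
>     io_uring_queue_exit(&ring);
>     return res == 4096 ? 0 : 1;
> }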
> 
> Searching the lists, I found a few mentions that Jens published a comparison stating otherwise, but I can't find it. Could you please give me some pointers?

Hi Felipe,

This is probably what you are looking for:

https://lore.kernel.org/linux-block/20190116175003.17880-1-axboe@kernel.dk/

Bart.


