Re: [PATCH v5 00/25] RTRS (former IBTRS) rdma transport library and the corresponding RNBD (former IBNBD) rdma network block device

On 1/3/20 4:39 AM, Jinpu Wang wrote:
> Performance results for the v5.5-rc1 kernel are here:
>    link: https://github.com/ionos-enterprise/ibnbd/tree/develop/performance/v5-v5.5-rc1
>
> For some workloads RNBD is faster, for other workloads NVMeoF is faster.

Thank you for sharing these graphs.

Do the graphs in RNBD-SinglePath.pdf show that NVMeOF achieves similar or higher IOPS, higher bandwidth, and lower latency than RNBD for workloads with a 4 KB block size, and also for mixed workloads with fewer than 20 disks, whether or not invalidation is enabled for RNBD?

Is it already clear why NVMeOF performance drops once the number of disks exceeds 25? Could that be caused by contention in the block layer tag allocator, since multiple NVMe namespaces share a tag set? Can that contention be avoided by increasing the NVMeoF queue depth further?
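
For what it's worth, below is a minimal, hypothetical sketch of the sharing pattern I am asking about: several request queues drawing their tags from a single blk_mq_tag_set, which is what the NVMe core does for the namespaces of one controller. It is only an illustration written against the blk-mq API as of v5.5, not code from RNBD or the NVMe driver; all identifiers are made up and error handling is abbreviated.

/* sketch_tagset.c - hypothetical illustration, not driver code */
#include <linux/module.h>
#include <linux/err.h>
#include <linux/blk-mq.h>

#define SKETCH_NR_QUEUES 32

static blk_status_t sketch_queue_rq(struct blk_mq_hw_ctx *hctx,
				    const struct blk_mq_queue_data *bd)
{
	/* Complete every request immediately: this sketch only
	 * exercises tag allocation, not real I/O.
	 */
	blk_mq_start_request(bd->rq);
	blk_mq_end_request(bd->rq, BLK_STS_OK);
	return BLK_STS_OK;
}

static const struct blk_mq_ops sketch_mq_ops = {
	.queue_rq = sketch_queue_rq,
};

static struct blk_mq_tag_set sketch_set;
static struct request_queue *sketch_q[SKETCH_NR_QUEUES];

static int __init sketch_init(void)
{
	int i, ret;

	sketch_set.ops = &sketch_mq_ops;
	sketch_set.nr_hw_queues = 1;
	sketch_set.queue_depth = 128;	/* one tag budget for all queues */
	sketch_set.numa_node = NUMA_NO_NODE;
	sketch_set.flags = BLK_MQ_F_SHOULD_MERGE;

	ret = blk_mq_alloc_tag_set(&sketch_set);
	if (ret)
		return ret;

	/*
	 * All queues draw tags from the same 128-entry pool. Once a
	 * second queue becomes active, blk-mq marks the set as shared
	 * and caps each active queue at roughly queue_depth divided by
	 * the number of active queues (the fair-sharing logic in
	 * blk-mq-tag.c), so adding queues (namespaces) shrinks the
	 * effective per-queue depth instead of scaling it.
	 */
	for (i = 0; i < SKETCH_NR_QUEUES; i++) {
		sketch_q[i] = blk_mq_init_queue(&sketch_set);
		if (IS_ERR(sketch_q[i])) {
			ret = PTR_ERR(sketch_q[i]);
			goto err;
		}
	}
	return 0;
err:
	while (--i >= 0)
		blk_cleanup_queue(sketch_q[i]);
	blk_mq_free_tag_set(&sketch_set);
	return ret;
}

static void __exit sketch_exit(void)
{
	int i;

	for (i = 0; i < SKETCH_NR_QUEUES; i++)
		blk_cleanup_queue(sketch_q[i]);
	blk_mq_free_tag_set(&sketch_set);
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");

If that per-queue cap is indeed the bottleneck, raising the NVMeoF queue depth at connect time (nvme-cli's --queue-size option, if I recall its name correctly) would be a cheap way to test the hypothesis.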

Thanks,

Bart.




