On 1/3/20 4:39 AM, Jinpu Wang wrote:
> Performance results for the v5.5-rc1 kernel are here:
> link: https://github.com/ionos-enterprise/ibnbd/tree/develop/performance/v5-v5.5-rc1
> For some workloads RNBD is faster; for others, NVMeoF is faster.
Thanks for sharing these graphs.
Do the graphs in RNBD-SinglePath.pdf show that NVMeOF achieves similar
or higher IOPS, higher bandwidth and lower latency than RNBD for
workloads with a block size of 4 KB, and also for mixed workloads with
fewer than 20 disks, whether or not invalidation is enabled for RNBD?
Is it already clear why NVMeOF performance drops once the number of
disks exceeds 25? Is that perhaps caused by contention on the block
layer tag allocator, since multiple NVMe namespaces share a tag set?
Can that contention be avoided by increasing the NVMeoF queue depth
further?
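One way to test that last hypothesis is to reconnect to the target with a
larger per-queue depth via nvme-cli's --queue-size option. A rough sketch
(the transport, address and NQN below are placeholders, not values from the
benchmark setup):

```shell
# Disconnect the existing session (NQN is a placeholder).
nvme disconnect --nqn=<target-nqn>

# Reconnect with a deeper submission queue so the shared blk-mq tag set
# is less likely to be exhausted when many namespaces submit I/O at once.
# nvme-cli's default --queue-size is 128.
nvme connect --transport=rdma --traddr=<target-ip> --trsvcid=4420 \
             --nqn=<target-nqn> --queue-size=1024

# If the kernel exposes it, check the effective queue depth afterwards:
cat /sys/class/nvme/nvme0/sqsize
```

If IOPS stops dropping past 25 disks with the larger queue depth, tag
contention would be a plausible explanation; if not, the bottleneck is
likely elsewhere.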
Thanks,
Bart.