Re: [PATCH v5 00/25] RTRS (former IBTRS) rdma transport library and the corresponding RNBD (former IBNBD) rdma network block device

On Fri, Jan 3, 2020 at 5:29 PM Bart Van Assche <bvanassche@xxxxxxx> wrote:
>
> On 1/3/20 4:39 AM, Jinpu Wang wrote:
> > Performance results for the v5.5-rc1 kernel are here:
> >    link: https://github.com/ionos-enterprise/ibnbd/tree/develop/performance/v5-v5.5-rc1
> >
> > For some workloads RNBD is faster, for others NVMeoF is faster.
>
> Thank you for having shared these graphs.
>
> Do the graphs in RNBD-SinglePath.pdf show that NVMeOF achieves similar
> or higher IOPS, higher bandwidth, and lower latency than RNBD for
> workloads with a block size of 4 KB, and also for mixed workloads with
> fewer than 20 disks, whether or not invalidation is enabled for RNBD?
Hi Bart,

Yes, that's the result on one pair of servers with ConnectX-4 HCAs. I
ran another test on two other servers with ConnectX-5 HCAs, and the
results are quite different. We will double-check the performance
results on the old machines as well and share the new results later.
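
A workload of the kind described above (4 KB blocks, mixed random
read/write) can be reproduced with fio; a minimal sketch of such a run
against a single device is below (the device path, read/write mix, and
queue depth are placeholders, not the exact parameters behind the
linked results):

    # 4 KB mixed random read/write for 60s against one block device
    # (assumed device path; adjust rwmixread/iodepth to the test matrix)
    fio --name=mixed-4k --filename=/dev/rnbd0 --ioengine=libaio \
        --direct=1 --bs=4k --rw=randrw --rwmixread=70 --iodepth=128 \
        --runtime=60 --time_based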


>
> Is it already clear why NVMeOF performance drops if the number of disks
> is above 25? Is that perhaps caused by contention on the block layer tag
> allocator because multiple NVMe namespaces share a tag set? Can that
> contention be avoided by increasing the NVMeOF queue depth further?
Not yet, we will check.
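
One knob we can try is the NVMeoF queue depth: nvme-cli lets us raise
the per-queue depth at connect time via --queue-size (default 128).
A sketch, with a placeholder address and NQN; whether the target
accepts a deeper queue depends on its own limits:

    # connect an NVMeoF/RDMA target with a deeper I/O queue
    # (address and NQN below are placeholders)
    nvme connect -t rdma -a 10.0.0.2 -s 4420 \
        -n nqn.2014-08.org.nvmexpress:example-target --queue-size=512
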
>
> Thanks,
>
> Bart.
>
>
Thanks


