Re: [question]Why our soft-RoCE throughput is quite low compared with TCP

On Mon, Nov 18, 2019 at 02:38:19PM +0800, wangqi wrote:
> On 2019/11/16 12:07 AM, Leon Romanovsky wrote:
>
> > On Fri, Nov 15, 2019 at 09:26:41PM +0800, QWang wrote:
> >> Dear experts on RDMA,
> >>     We are sorry to disturb you. Because of a project, we need to
> >> integrate soft-RoCE into our system. However, we are very confused by our
> >> soft-RoCE throughput results, which are quite low compared with TCP
> >> throughput. The throughput of soft-RoCE in our tests, measured by ib_send_bw
> >> and ib_read_bw, is only 2 Gbps (the net link bandwidth is 100 Gbps, the
> >> two Xeon E5 servers with Mellanox ConnectX-4 cards are connected
> >> back-to-back, and the OS is Ubuntu 16.04 with kernel 4.15.0-041500-generic).
> >> The throughputs of hard-RoCE and TCP are normal: 100 Gbps and 20 Gbps,
> >> respectively. But in figure 6 of the attached paper "A Performance
> >> Comparison of Container Networking Alternatives", the throughput of
> >> soft-RoCE reaches up to 23 Gbps. In our tests, we use the open-source
> >> soft-RoCE from GitHub at https://github.com/linux-rdma. Do you know how
> >> we can get such high bandwidth? Do we need to configure some OS settings?
> >>     We found that in 2017 someone reported the same problem and posted all
> >> his detailed results at https://bugzilla.kernel.org/show_bug.cgi?id=190951,
> >> but it remains unsolved. His results are nearly the same as ours. For
> >> simplicity, we do not repeat our results in this email; you can find very
> >> detailed information on the web page listed above.
> >>     We are very confused by our results and would greatly appreciate an
> >> early reply.
> >> Best wishes,
> >> Wang Qi
> > Can you please fix your email client?
> > The email text looks like one big sentence.
> >
> > From the perf report attached to that bugzilla, it looks like RXE does a
> > lot of CRC32 calculations, which is consistent with what Matan said a
> > long time ago: RXE is "stuck" in the ICRC calculations required by the spec.
> >
> > I'm curious, what are your CONFIG_CRYPTO_* configs?
> >
> > Thanks
> >
> >
>
>
> I'm sorry for the editor problem in my last email. Now I use another editor.

Now your email has an extra blank line between lines.
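
To expand on the CONFIG_CRYPTO question above: one quick way to see both
what the kernel was built with and which driver actually backs the "crc32"
algorithm at runtime is something like the sketch below (the config path
assumes a stock Ubuntu kernel, and it assumes your rxe version goes through
the kernel crypto API for ICRC):

    # Build time: which CRC32 implementations were compiled in?
    grep 'CONFIG_CRYPTO_CRC32' /boot/config-$(uname -r)

    # Runtime: which drivers provide "crc32", and with what priority?
    # A crypto API user should get the highest-priority driver, so on x86
    # you want to see an accelerated one (e.g. crc32-pclmul) registered,
    # not only crc32-generic.
    grep -B1 -A3 'crc32' /proc/crypto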

>
> We got our rdma-core and perftest from
> https://github.com/linux-rdma/rdma-core/archive/v25.0.tar.gz and
> https://github.com/linux-rdma/perftest/archive/4.4-0.8.tar.gz, respectively.
>
> We attach six files to clarify our problem.
>
> * The first file, "server_tcp_vs_softroce_performance.txt", contains the TCP
>   and soft-RoCE throughput results on our two servers (connected
>   back-to-back).
> * The second file, "server_CONFIG_CRYPTO_result.txt", contains the
>   CONFIG_CRYPTO_* config results on the two servers.
> * The third file, "server_perf.txt", contains the result of "ib_send_bw
>   -n 10000 192.168.0.20 & perf record -ags sleep 10 & wait" on our two
>   servers (we used "perf report --header >perf" to make the file).
> * The fourth file, "vm_tcp_vs_softroce_performance.txt", contains the TCP
>   and soft-RoCE throughput results on two virtual machines running the
>   latest Linux kernel, 5.4.0-rc7 (we got the kernel from
>   https://github.com/torvalds/linux/archive/v5.4-rc7.zip).
> * The fifth file, "vm_CONFIG_CRYPTO_result.txt", contains the corresponding
>   CONFIG_CRYPTO_* results for the two virtual machines.
> * The sixth file, "vm_perf.txt", contains the result of "ib_send_bw
>   -n 10000 192.168.122.228 & perf record -ags sleep 10 & wait" on the two
>   virtual machines.
>
> On the other hand, we tried to use the rxe command "rxe_cfg crc disable".

I don't see any parsing of "crc disable" in the upstream variant of rxe_cfg,
and there is no such module parameter in the kernel.

Thanks


