Infiniband special ops?


Hi guys.

Hoping some networking experts may stumble upon this message: I have a direct host-to-host IPoIB connection, and:

-> $ ethtool ib1
Settings for ib1:
    Supported ports: [  ]
    Supported link modes:   Not reported
    Supported pause frame use: No
    Supports auto-negotiation: No
    Supported FEC modes: Not reported
    Advertised link modes:  Not reported
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Advertised FEC modes: Not reported
    Speed: 40000Mb/s
    Duplex: Full
    Auto-negotiation: on
    Port: Other
    PHYAD: 255
    Transceiver: internal
    Link detected: yes

That is the output on both ends, on both hosts. Yet:

-> $ iperf3 -c 10.5.5.97
Connecting to host 10.5.5.97, port 5201
[  5] local 10.5.5.49 port 56874 connected to 10.5.5.97 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.36 GBytes  11.6 Gbits/sec    0   2.50 MBytes
[  5]   1.00-2.00   sec  1.87 GBytes  16.0 Gbits/sec    0   2.50 MBytes
[  5]   2.00-3.00   sec  1.84 GBytes  15.8 Gbits/sec    0   2.50 MBytes
[  5]   3.00-4.00   sec  1.83 GBytes  15.7 Gbits/sec    0   2.50 MBytes
[  5]   4.00-5.00   sec  1.61 GBytes  13.9 Gbits/sec    0   2.50 MBytes
[  5]   5.00-6.00   sec  1.60 GBytes  13.8 Gbits/sec    0   2.50 MBytes
[  5]   6.00-7.00   sec  1.56 GBytes  13.4 Gbits/sec    0   2.50 MBytes
[  5]   7.00-8.00   sec  1.52 GBytes  13.1 Gbits/sec    0   2.50 MBytes
[  5]   8.00-9.00   sec  1.52 GBytes  13.1 Gbits/sec    0   2.50 MBytes
[  5]   9.00-10.00  sec  1.52 GBytes  13.1 Gbits/sec    0   2.50 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.2 GBytes  13.9 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  16.2 GBytes  13.9 Gbits/sec                  receiver

The platform hosting the link is rather old; PCIe is only 2.0, but at x8 that should be able to carry a lot more than ~13 Gbit/s.
The InfiniBand adapter is a Mellanox ConnectX-3.
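As a back-of-envelope check on that claim (ignoring TLP/protocol overhead): PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so an x8 link leaves roughly 32 Gbit/s usable:

```shell
# PCIe 2.0: 5 GT/s per lane; 8b/10b encoding leaves 80% as payload bits.
lanes=8
per_lane_gt=5
usable_gbit=$(( per_lane_gt * 8 * lanes / 10 ))  # 4 Gbit/s per lane x 8 lanes
echo "PCIe 2.0 x${lanes} usable bandwidth: ~${usable_gbit} Gbit/s"
```

So the bus should not be the ~13 Gbit/s ceiling, even before counting overhead.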

Any thoughts on how to track down the bottleneck would be much appreciated.
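For what it's worth, here is a sketch of what I plan to check next (interface name and peer address are the ones above; the usual IPoIB suspects are datagram vs connected transport mode, the small datagram MTU, and a single-stream CPU limit):

```shell
#!/bin/sh
# Diagnostic sketch -- "ib1" and 10.5.5.97 are the names from this thread.
IF=ib1
# 1. IPoIB transport mode: "datagram" (UD) is much slower than "connected" (CM),
#    and it also caps the MTU at 2044 bytes (connected mode allows 65520).
if [ -r "/sys/class/net/$IF/mode" ]; then
    echo "mode: $(cat /sys/class/net/$IF/mode)"
    echo "mtu:  $(cat /sys/class/net/$IF/mtu)"
else
    echo "no IPoIB interface $IF on this host"
fi
# 2. Confirm the HCA really negotiated PCIe 2.0 x8 (15b3 = Mellanox vendor ID):
#    lspci -vv -d 15b3: | grep -E 'LnkCap|LnkSta'
# 3. Rule out a single-stream CPU bottleneck with parallel iperf3 streams:
#    iperf3 -c 10.5.5.97 -P 4
```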
thanks, L
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos



