Re: Typical 10GbE latency

Mellanox is also doing Ethernet now; see, for example,

http://www.mellanox.com/page/products_dyn?product_family=163&mtag=sx1012

- 220 ns for 40GbE
- 280 ns for 10GbE


And I think it's also possible to do RoCE (RDMA over Converged Ethernet) with Mellanox ConnectX-3 adapters.
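
If you want to confirm that a ConnectX-3 actually shows up as an RDMA-capable device before trying RoCE, a short C sketch using libibverbs is below. It only enumerates the verbs devices the kernel exposes, nothing Ceph-specific; it assumes the rdma-core / libibverbs development headers are installed, and the file name and build line (gcc -O2 -o list_rdma_devs list_rdma_devs.c -libverbs) are just examples.

/* list_rdma_devs.c -- enumerate RDMA-capable devices via libibverbs.
 * A ConnectX-3 with its driver loaded should show up here (e.g. mlx4_0),
 * whether the port is configured for InfiniBand or Ethernet/RoCE. */
#include <stdio.h>
#include <endian.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);

    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found (is the driver loaded?)\n");
        return 1;
    }
    for (int i = 0; i < num_devices; i++) {
        /* Print the device name and its node GUID (stored big-endian). */
        printf("device: %-10s  node GUID: 0x%016llx\n",
               ibv_get_device_name(devs[i]),
               (unsigned long long)be64toh(ibv_get_device_guid(devs[i])));
    }
    ibv_free_device_list(devs);
    return 0;
}

If the adapter is listed, RoCE traffic can then be driven through the normal verbs API; if the list is empty, no amount of switch tuning will help.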



----- Original Message -----

From: "Robert LeBlanc" <robert@xxxxxxxxxxxxx>
To: "Stefan Priebe" <s.priebe@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Friday, 7 November 2014 16:00:40
Subject: Re: Typical 10GbE latency


InfiniBand has much lower latencies when doing RDMA and native IB traffic. IPoIB adds all the Ethernet processing that then has to be done in software, yet even with that disadvantage it is still comparable to Ethernet. Once Ceph can do native RDMA, InfiniBand should have an edge.
Robert LeBlanc 
Sent from a mobile device, please excuse any typos.
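
For anyone comparing numbers: the ping figures in this thread are ICMP round trips handled entirely in the kernel, while what Ceph sees is closer to a small TCP round trip on top of IPoIB or Ethernet. The rough TCP ping-pong sketch below measures that; it assumes a plain echo responder is already listening on the peer, and the address, port and iteration count are placeholders rather than anything from this thread. Build with something like gcc -O2 -o tcp_rtt tcp_rtt.c.

/* tcp_rtt.c -- rough application-level latency probe: send a 1-byte
 * payload to an echo responder on the peer and report min/avg/max RTT. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static double now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(int argc, char **argv)
{
    const char *host = argc > 1 ? argv[1] : "192.0.2.1"; /* placeholder peer */
    int port = argc > 2 ? atoi(argv[2]) : 7;             /* classic echo port */
    int iterations = 1000;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    /* Disable Nagle so each 1-byte write goes out immediately. */
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    double min = 1e12, max = 0, sum = 0;
    char ping = 'x', pong;
    for (int i = 0; i < iterations; i++) {
        double t0 = now_us();
        if (write(fd, &ping, 1) != 1 || read(fd, &pong, 1) != 1) {
            perror("ping-pong");
            return 1;
        }
        double rtt = now_us() - t0;
        sum += rtt;
        if (rtt < min) min = rtt;
        if (rtt > max) max = rtt;
    }
    printf("tcp rtt min/avg/max = %.1f/%.1f/%.1f us over %d round trips\n",
           min, sum / iterations, max, iterations);
    close(fd);
    return 0;
}

Values from this will usually sit a bit above the ICMP rtt lines quoted below, since each round trip also crosses the socket layer on both ends.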
On Nov 7, 2014 4:25 AM, "Stefan Priebe - Profihost AG" <s.priebe@xxxxxxxxxxxx> wrote: 


Hi, 

this is with an Intel 10GbE bonded (2x10Gbit/s) network. 
rtt min/avg/max/mdev = 0.053/0.107/0.184/0.034 ms 

I thought that the Mellanox stuff had lower latencies. 

Stefan 

On 06.11.2014 at 18:09, Robert LeBlanc wrote: 
> rtt min/avg/max/mdev = 0.130/0.157/0.190/0.016 ms 
> 
> IPoIB on a Mellanox ConnectX-3 MT27500 FDR adapter and a Mellanox IS5022 QDR 
> switch, MTU set to 65520. CentOS 7.0.1406 
> running 3.17.2-1.el7.elrepo.x86_64 on an Intel(R) Atom(TM) CPU C2750 with 
> 32 GB of RAM. 
> 
> On Thu, Nov 6, 2014 at 9:46 AM, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote: 
> 
> Hi, 
> no special optimizations on the host. 
> In this case the pings are from a Proxmox VE host to Ceph OSDs 
> (Ubuntu + Debian). 
> 
> The pings from one osd to the others are comparable. 
> 
> Udo 
> 
> On 06.11.2014 15:00, Irek Fasikhov wrote: 
>> Hi, Udo. 
>> Good values :) 
>> 
>> Did you do any additional optimization on the host? 
>> Thanks. 
>> 
>> Thu Nov 06 2014 at 16:57:36, Udo Lembke <ulembke@xxxxxxxxxxxx>: 
>> 
>> Hi, 
>> from one host to five OSD-hosts. 
>> 
>> NIC: Intel 82599EB; jumbo frames; single IBM G8124 switch 
>> (blade network). 
>> 
>> rtt min/avg/max/mdev = 0.075/0.114/0.231/0.037 ms 
>> rtt min/avg/max/mdev = 0.088/0.164/0.739/0.072 ms 
>> rtt min/avg/max/mdev = 0.081/0.141/0.229/0.030 ms 
>> rtt min/avg/max/mdev = 0.083/0.115/0.183/0.030 ms 
>> rtt min/avg/max/mdev = 0.087/0.144/0.190/0.028 ms 
>> 
>> 
>> Udo 
>> 
> 
> 
> _______________________________________________ 
> ceph-users mailing list 
> ceph-users@xxxxxxxxxxxxxx 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 
> 
> 
> 



_______________________________________________ 
ceph-users mailing list 
ceph-users@xxxxxxxxxxxxxx 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 




