Good values :)
Did you apply any additional optimization on the hosts?
Thanks.
On Thu, Nov 06, 2014 at 16:57:36, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:
Hi,
here are the results of the same test, from one host to five OSD hosts.
NICs: Intel 82599EB; jumbo frames; single switch: IBM G8124 (blade network).
rtt min/avg/max/mdev = 0.075/0.114/0.231/0.037 ms
rtt min/avg/max/mdev = 0.088/0.164/0.739/0.072 ms
rtt min/avg/max/mdev = 0.081/0.141/0.229/0.030 ms
rtt min/avg/max/mdev = 0.083/0.115/0.183/0.030 ms
rtt min/avg/max/mdev = 0.087/0.144/0.190/0.028 ms
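
For anyone who wants to repeat such a sweep, here is a minimal sketch of a
loop that runs the same test against each OSD host; the osd1..osd5
hostnames are placeholders for your own nodes:

#!/bin/bash
# Ping every OSD host with an 8192-byte payload, 100 packets each,
# and print only the rtt summary line per host.
for host in osd1 osd2 osd3 osd4 osd5; do    # placeholder hostnames
    echo -n "$host: "
    ping -s 8192 -c 100 -n -q "$host" | tail -n 1
done
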
Udo
On 06.11.2014 14:18, Wido den Hollander wrote:
> Hello,
>
> While working at a customer I've run into 10GbE latency which seems
> high to me.
>
> I have access to a couple of Ceph clusters and I ran a simple ping test:
>
> $ ping -s 8192 -c 100 -n <ip>
>
> Two results I got:
>
> rtt min/avg/max/mdev = 0.080/0.131/0.235/0.039 ms
> rtt min/avg/max/mdev = 0.128/0.168/0.226/0.023 ms
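>
> For easier side-by-side comparison, the avg figure can be pulled straight
> out of the summary; a small sketch using the same ping invocation:
>
> $ ping -s 8192 -c 100 -n -q <ip> | awk -F'/' '/^rtt/ {print "avg:", $5, "ms"}'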
>
> Both of these environments run Intel 82599ES 10Gbit cards in LACP;
> one with Extreme Networks switches, the other with Arista.
>
> Now, on an environment with Cisco Nexus 3000 and Nexus 7000 switches I'm
> seeing:
>
> rtt min/avg/max/mdev = 0.160/0.244/0.298/0.029 ms
>
> As you can see, the Cisco Nexus network shows higher latency than the
> other setups.
>
> You would say the switches are to blame, but we also tried a direct
> TwinAx connection, and that didn't help.
>
> This setup also uses the Intel 82599ES cards, so the cards don't seem to
> be the problem.
>
> The MTU is set to 9000 on all these networks and cards.
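>
> To rule out an MTU mismatch somewhere in the path, a don't-fragment ping
> sized to fill a 9000-byte frame can help; 8972 bytes of ICMP payload plus
> 28 bytes of IP/ICMP headers is exactly 9000:
>
> $ ping -M do -s 8972 -c 3 -n <ip>
>
> If any hop in the path is not set to MTU 9000, this typically fails with
> a "message too long" error instead of a reply.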
>
> I was wondering: could others with a Ceph cluster running on 10GbE
> perform a simple network latency test like this? I'd like to compare
> the results.
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com