On 04/06/2012 10:23, Stefan Majer wrote:
Hi Hannes,
our production environment runs on 10 GbE infrastructure. We had a
lot of trouble getting to where we are today.
We use Intel X520 D2 cards on our OSDs and Nexus switch
infrastructure. All other cards we tested failed horribly.
we have Intel Corporation 82599EB 10 Gigabit Dual Port Backplane
Connection (rev 01)... Don't know the 'commercial name'. ixgbe driver.
Some of the problems we encountered have been:
- page allocation failures in the ixgbe driver --> fixed upstream
- problems with jumbo frames; we had to disable tso, gro, and lro -->
this is the most obscure thing
- various tuning via sysctl in the net.core and net.ipv4 area --> this
was also the outcome of Stefan's benchmarking odyssey.
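The offload point above can be sketched as a short script. This is a
hedged sketch, not the exact commands from the thread: the interface
name "eth2" is taken from the RPS script later in this mail, and you
should substitute your own OSD-facing NIC. It needs root and ethtool.

```shell
#!/bin/sh
# Disable the offloads named in the list above (tso, gro, lro) on the
# 10GbE interface. IFACE defaults to eth2 (an assumption borrowed from
# the RPS script further down); pass your own interface as $1.
IFACE="${1:-eth2}"
if [ -e "/sys/class/net/$IFACE" ]; then
    # -K changes offload settings; requires root.
    ethtool -K "$IFACE" tso off gro off lro off \
        || echo "ethtool failed (not root, or feature unsupported?)" >&2
else
    echo "interface $IFACE not found, nothing to do" >&2
fi
```

Disabling these offloads trades some CPU for predictable behaviour
with jumbo frames, which matches the symptom described above.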
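The mail does not say which sysctls Stefan ended up tuning, so as a
hedged illustration only, these are the usual net.core / net.ipv4
buffer knobs people raise for 10GbE. The snippet just prints the
current values; applying new ones is done with `sysctl -w key=value`
as root.

```shell
#!/bin/sh
# Illustrative only: the exact keys from Stefan's benchmarking are not
# given in the thread. Print the common 10GbE-related buffer sysctls.
for key in net.core.rmem_max net.core.wmem_max \
           net.ipv4.tcp_rmem net.ipv4.tcp_wmem; do
    printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo 'n/a')"
done
# To apply (as root), e.g.:
# sysctl -w net.core.rmem_max=16777216
```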
Some tuning we made:
-> Turning off the virtualisation extensions in the BIOS. We don't know
why, but leaving them on gave us crappy performance. We usually enable
them because we use KVM a lot, but in our case the OSDs run on bare
metal, and disabling the virtualisation extensions gives us a very big
boost. It may be a BIOS bug in our machines (Dell M610).
-> One of my colleagues played with receive flow steering; the Intel
card supports multiple queues, so it seems we can gain a little with it:
#!/bin/sh
# RPS: let any CPU (mask FFFFFFFF) process packets from each of the
# 24 rx queues of eth2
for x in $(seq 0 23); do
    echo FFFFFFFF > /sys/class/net/eth2/queues/rx-${x}/rps_cpus
done
# RFS: size the global flow table, then the per-queue flow counts
echo 16384 > /proc/sys/net/core/rps_sock_flow_entries
for x in $(seq 0 23); do
    echo 16384 > /sys/class/net/eth2/queues/rx-${x}/rps_flow_cnt
done
But after all this we are actually quite happy, and are only limited by
the speed of the drives (2TB SATA).
The fsync is in fact an fdatasync, which is available in newer glibc.
If you don't use btrfs (we use xfs) you need a recent glibc with
fdatasync support.
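Since the thread only says "newer glibc" without naming a version, a
quick way to see what you are running is to ask glibc itself and then
compare against its release notes:

```shell
#!/bin/sh
# Print the version of the glibc this system runs, e.g. "glibc 2.11.3".
# Which version is "recent enough" for fdatasync here is not stated in
# the thread, so this only gives you the number to check.
getconf GNU_LIBC_VERSION
```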
Might that explain the lousy performance we're seeing with xfs right now?
That's the main reason we're stuck with btrfs for the moment.
We're using Debian 'stable'; libc is libc6 2.11.3-3.
Probably too old?
Cheers,
--
Yann Dupont - Service IRTS, DSI Université de Nantes
Tel : 02.53.48.49.20 - Mail/Jabber : Yann.Dupont@xxxxxxxxxxxxxx