Michael Lowe writes:
> I'd say always max out your mtu.

...if you can handle the complexity. I'm not saying large MTUs are rocket science, but you do have to make sure that all nodes in your logical IP network have the same MTU and that all layer-2 devices pass large packets - and that this stays true as you add things to your cluster/network.

> Your nic will push out the same amount of bits either way, it's just a
> question of what fraction of the bits you push are headers vs
> payloads.

Don't overestimate the efficiency gain in terms of goodput vs. raw bandwidth. TCP over standard 1500-byte-MTU Ethernet is already quite efficient for large transactions - more than 95%, even with modern TCP options included. You can increase that to about 99% with 9000-byte MTUs, but that's a modest gain, and it won't help you much if you're running into the bandwidth limits of your GigE. Upgrade to 10GE (or multiple GigEs) instead.

~9000-byte MTUs do bring your packet rates down by a factor of six (if most of the traffic comes from large transfers). The switches don't care, because they forward "in hardware" and can handle packets much smaller than 1500 bytes at line rate. Your servers will have less work to do, which can be very helpful if you're worried about running into CPU-core or interrupt-rate limitations. But again, don't overestimate the effect - modern GigE/10GE adapters and Linux network stacks have other effective ways of mitigating per-packet work: LRO/LSO, interrupt mitigation, etc.

I don't want this to sound overly negative - if you want to get the last bit of performance out of your Ceph cluster, go for large MTUs! 9000 bytes is safe with most switch vendors and modern network adapters (beyond that it gets tricky). But don't expect miracles...

http://kb.pert.geant.net/PERTKB/JumboMTU might have some useful notes and references on large MTUs in general.
-- 
Simon.
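
For concreteness, here is a rough sketch of where the ~95% and ~99% goodput figures and the factor-of-six packet-rate reduction above come from. The per-packet overheads are assumptions, not numbers from the thread: 18 bytes of Ethernet framing (MAC header + FCS, ignoring preamble and inter-frame gap) plus 52 bytes of IPv4/TCP headers including the timestamp option; real traffic will vary.

# Back-of-the-envelope goodput for bulk TCP over Ethernet.
# Assumed per-packet overheads (not from the original mail):
#   14-byte MAC header + 4-byte FCS on the wire,
#   20-byte IPv4 header + 20-byte TCP header + 12 bytes of TCP options.
ETH_OVERHEAD = 14 + 4            # bytes added around each frame
IP_TCP_OVERHEAD = 20 + 20 + 12   # bytes inside the MTU, per packet

def goodput_fraction(mtu):
    """Fraction of the raw Ethernet rate left for TCP payload."""
    payload = mtu - IP_TCP_OVERHEAD   # TCP payload in a full-sized packet
    wire = mtu + ETH_OVERHEAD         # bytes on the wire per packet
    return payload / wire

for mtu in (1500, 9000):
    print("MTU %5d: %.1f%% goodput, %4d payload bytes per packet"
          % (mtu, 100 * goodput_fraction(mtu), mtu - IP_TCP_OVERHEAD))

print("packet-rate ratio, 1500 vs 9000: %.1fx"
      % ((9000 - IP_TCP_OVERHEAD) / (1500 - IP_TCP_OVERHEAD)))

Under these assumptions this prints roughly 95.4% for MTU 1500, 99.2% for MTU 9000, and a packet-rate ratio of about 6.2. Counting the preamble and inter-frame gap as well shaves a bit more off both goodput numbers (about one more percentage point at MTU 1500), which doesn't change the conclusion.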