Whereabouts are you maxing out? I assume it's not at the nodes but rather at the NFS head. If that's the case, you might want to look at a different, somewhat more time-tested approach of standing up additional NFS heads (as you already mentioned); a rough sketch of one way to split home areas across several heads follows below the quoted message.

Having said that, we are currently using 10G links between a pair of Cisco 6509Es and another pair to a set of Foundry FESX48s (SAN traffic). No real issues to speak of, but this is backbone and not to the server. Our busiest machine, network-wise, is our backup server, and we've just bonded a pair of 1-gig links to it with great success (a sketch of that config is below as well).

Jake Grimmett wrote:
> If I could ask a question about 10Gbit ethernet...
>
> We have a 70-node cluster built on CentOS 5.1, using NFS to mount user home
> areas. At the moment the network is a bottleneck, and it's going to get worse
> as we add another 112 CPUs in the form of two blade servers.
>
> To help things breathe better, we are considering building three new NFS
> servers and connecting these and the blade servers directly via 10Gbit to a
> new core switch; the older nodes will stay on 1Gbit ethernet but be given
> new switches uplinked via 10Gbit to the core switch.
>
> Before I spend a great deal of money, can I ask if anyone here has experience
> of 10Gbit? Dell, HP, Supermicro, IBM (etc.) seem to be pushing the NetXen
> PCIe cards, so I guess those drivers work...? As to media, although CX4 seems
> cheaper than optical, I hear the cabling is nasty. And is the magical fairy
> going to fix 10Gbit over Cat6a anytime soon?
>
> Any thoughts appreciated.
>
> Jake
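
On the "more NFS heads" idea: one way to spread home areas across several servers without the clients needing to know where each one lives is an autofs indirect map. This is only a sketch under made-up assumptions; the server names nfs1/nfs2/nfs3 and the /export/home paths are placeholders, not anything from Jake's actual setup:

    # /etc/auto.master: hand /home over to the automounter
    /home   /etc/auto.home

    # /etc/auto.home: indirect map, each user's home can live on a different head
    alice   -rw,hard,intr   nfs1:/export/home/alice
    bob     -rw,hard,intr   nfs2:/export/home/bob
    # wildcard catch-all: everyone else comes off nfs3
    *       -rw,hard,intr   nfs3:/export/home/&

A "service autofs reload" picks up map changes, and moving a user from one head to another is just one map line plus copying the data.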
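
And for the record, the bonding on our backup server is nothing exotic. On CentOS 5 it looks roughly like the following; treat it as a sketch only, since the mode, IP address and interface names here are examples rather than our exact config (802.3ad needs the switch ports set up for LACP, whereas balance-alb needs nothing on the switch side):

    # /etc/modprobe.conf: load the bonding driver and pick a mode
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0: the bonded interface carries the IP
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    USERCTL=no

    # /etc/sysconfig/network-scripts/ifcfg-eth0: enslave the physical NIC (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes
    USERCTL=no

A "service network restart" brings it up, and /proc/net/bonding/bond0 will tell you whether both slaves are actually active.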