> Hi List,
>
> Each of my OSD nodes has five 1 Gb network adapters and many OSDs, one
> disk per OSD. They are all connected to a 1 Gb switch.
>
> Currently I get an average of 100 MB/s read/write speed. To improve the
> throughput further, the network bandwidth will be the bottleneck, right?

Do you already have separate networks for public and cluster traffic?

> I can't afford to replace all the adapters and the switch with 10 Gb ones.
> How can I improve the throughput with the current gear?
>
> My first thought is to use bonding, since we have multiple adapters. But
> bonding has a performance cost, it surely cannot just multiply the
> throughput, and it depends on switch support.

LACP bonding should be okay. Each connection will only be 1 Gbit/s, but if
you have multiple clients and multiple connections you could see improved
aggregate performance.

If you want to use plain round-robin ordering at layer two, play with the
net/ipv4/tcp_reordering value to improve things. I do this and iperf gives
me 2 Gbit/s of throughput, but with an increase in CPU usage, of course.

> My second thought is to group the adapters and OSDs. For example, say we
> have three adapters called A1, A2, A3 and six OSDs called O1, O2, ..., O6.
> Let O1 and O2 use A1 exclusively, O3 and O4 use A2 exclusively, and O5 and
> O6 use A3 exclusively. These are then separate groups, each with its own
> disks and adapters that are not shared; only CPU and memory are shared
> between groups.

I tested something similar. Each server has two disks and two adapters
assigned to the cluster network, with each adapter on a different subnet.
As long as each OSD can reach every IP address (because the host has
adapters on both networks), it should work fine and is probably better than
bonding.

True multipath would be nice for the public network, but LACP should still
give you an aggregate increase, even if individual connections remain
limited to the link speed of a single adapter. Some rough config sketches
for both approaches are appended below.

James
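
To make the LACP suggestion above concrete, here is a minimal sketch of an
802.3ad bond on Debian/Ubuntu with the ifenslave package. The interface
names and addresses are placeholders I chose, not anything from the thread,
and the switch ports have to be configured for LACP as well:

    # /etc/network/interfaces -- eth1/eth2 and the address are placeholders
    auto bond0
    iface bond0 inet static
        address 192.168.10.11
        netmask 255.255.255.0
        bond-slaves eth1 eth2
        bond-mode 802.3ad              # LACP; needs a matching port-channel on the switch
        bond-miimon 100                # link-monitoring interval in ms
        bond-xmit-hash-policy layer3+4 # hash on IP+port so different connections
                                       # can land on different slaves

A single TCP connection still tops out at 1 Gbit/s in this mode; the gain
only shows up when several clients or connections hash onto different
slaves.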
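
For the round-robin variant (bond-mode balance-rr) that James mentions,
packets from a single stream are sprayed across all slaves and arrive out
of order, which TCP otherwise treats as loss; raising the
net.ipv4.tcp_reordering sysctl makes it more tolerant of this. A sketch --
the value of 10 is just a starting point to experiment with, not a
recommendation from the thread:

    # check the current value (the usual default is 3)
    sysctl net.ipv4.tcp_reordering

    # raise it at runtime
    sysctl -w net.ipv4.tcp_reordering=10

    # make it persistent by adding the same setting to /etc/sysctl.conf:
    #   net.ipv4.tcp_reordering = 10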
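
For the public/cluster split and the adapter-per-group idea, ceph.conf can
declare the networks globally and pin each OSD's cluster-side address to a
specific subnet (and therefore a specific adapter). The subnets and OSD ids
below are placeholders, a sketch of the scheme rather than James's actual
config:

    # ceph.conf (fragment)
    [global]
        public network  = 192.168.10.0/24
        cluster network = 10.0.1.0/24, 10.0.2.0/24

    # pin each OSD's replication traffic to one cluster adapter
    [osd.0]
        cluster addr = 10.0.1.21
    [osd.1]
        cluster addr = 10.0.1.21
    [osd.2]
        cluster addr = 10.0.2.21
    [osd.3]
        cluster addr = 10.0.2.21

Client traffic stays on the public network; only replication and recovery
between OSDs uses the cluster addresses, so this mainly helps write and
recovery throughput.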