My servers all have 4 x 1 GbE network adapters, and I'm presently using DRBD over a bonded round-robin link. Moving to Ceph, I'm thinking for each server:

- eth0 - LAN traffic for the server and VMs
- eth1 - "public" Ceph traffic
- eth2 + eth3 - LACP bond for "cluster" Ceph traffic

I'm thinking LACP should work okay because Ceph generates multiple connections, so I don't need plain round robin (which has overheads of its own).

Am I assigning the right sort of weighting by giving the "cluster" network double the bandwidth of the "public" network? Or would it work better with eth[1-3] all in a single LACP bond, carrying the public and cluster traffic together?

Thanks

James
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
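For context, here is a minimal sketch of the layout described above, assuming Debian-style ifupdown; the bond name and the 192.168.x.x subnets are illustrative placeholders, not taken from the post:

```
# /etc/network/interfaces fragment (hypothetical example)
auto bond0
iface bond0 inet static
    address 192.168.2.10/24          # cluster network (example subnet)
    bond-slaves eth2 eth3
    bond-mode 802.3ad                # LACP
    bond-xmit-hash-policy layer3+4   # hash per TCP connection, so Ceph's many
                                     # OSD-to-OSD connections can spread across both links
    bond-miimon 100
```

The matching ceph.conf fragment (again with example subnets) would tell Ceph which traffic goes where:

```
# ceph.conf fragment (example subnets)
[global]
public network  = 192.168.1.0/24   # eth1
cluster network = 192.168.2.0/24   # bond0 (eth2 + eth3)
```

Note that with layer3+4 hashing, any single TCP connection is still capped at 1 Gb/s; LACP only helps when there are multiple concurrent flows, which is the situation described above.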