Re: network layout

On Mon, Jul 29, 2013 at 9:38 PM, James Harper
<james.harper@xxxxxxxxxxxxxxxx> wrote:
> My servers all have 4 x 1Gb network adapters, and I'm presently using DRBD over a bonded round-robin (rr) link.
>
> Moving to ceph, I'm thinking for each server:
>
> eth0 - LAN traffic for server and VMs
> eth1 - "public" ceph traffic
> eth2+eth3 - LACP bonded for "cluster" ceph traffic
>
> I'm thinking LACP should work okay because there will be multiple connections generated by ceph so I don't need to do simple round robin or anything (which has overheads of its own).
>
> Am I assigning the right sort of weighting in giving the "cluster" network double the bandwidth of the "public" network? Or would it work better with eth[1-3] all in a single LACP bonded interface and put the public and cluster traffic together on that?
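For reference, the split described above maps onto the `public network` and `cluster network` options in ceph.conf (both are real option names). A minimal sketch, where the two subnets shown are placeholders, not values from this thread:

```ini
[global]
    # Subnet reached via eth1 (client/monitor traffic) - example value
    public network  = 192.168.1.0/24
    # Subnet reached via the eth2+eth3 LACP bond (replication/recovery) - example value
    cluster network = 192.168.2.0/24
```

OSDs will then bind their replication and heartbeat traffic to whichever local address falls in the cluster subnet, and everything else to the public one.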

The cluster network will probably see more traffic, but it depends on
how you configure your cluster. If you're only doing 2x replication,
you might be better off just bonding and sharing the links for
everything, since in normal operation it'll be basically the same
amount of traffic in and out on both interfaces. This is especially
true if your clients are doing more reads than writes...
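The 2x-replication point can be seen with simple arithmetic. This is a toy model (not Ceph's actual wire protocol), and `traffic` is a hypothetical helper: assume the primary OSD receives a write on the public network and forwards one copy per extra replica over the cluster network.

```python
def traffic(write_bytes, replicas):
    """Rough per-write traffic split under N-way replication.

    Returns (public_in, cluster_out) for the primary OSD, assuming
    the client sends the object once on the public network and the
    primary forwards replicas - 1 copies on the cluster network.
    """
    public_in = write_bytes                      # client -> primary
    cluster_out = write_bytes * (replicas - 1)   # primary -> other replicas
    return public_in, cluster_out

# With 2x replication the two networks carry roughly equal load:
assert traffic(100, 2) == (100, 100)
# With 3x replication the cluster network carries double:
assert traffic(100, 3) == (100, 200)
```

Reads skew this further toward the public side, since replicas aren't re-fetched over the cluster network on a normal read.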
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



