On 21/09/2020 5:40 am, Stefan Kooman wrote:
My experience with bonding and Ceph is pretty good (OpenvSwitch). Ceph uses lots of tcp connections, and those can get shifted (balanced) between interfaces depending on load.
Same here - I'm running 4×1Gb links (LACP, balance-tcp) on a 5-node cluster with 19 OSDs. With 20 active VMs it idles at under 1 MiB/s and spikes up to 100 MiB/s with no problem. During a heavy rebalance/repair, data rates on any one node can hit 400 MiB/s+.
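For anyone wanting to try the same setup, a rough sketch of the OVS side follows. Interface names (eth0-eth3) and the bridge name (br0) are placeholders for your own hardware; balance-tcp also needs LACP configured on the switch side.

```shell
# Create an OVS bridge and a 4-port LACP bond with per-flow (TCP) balancing.
# Ceph's many TCP connections then get hashed across the member links.
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1 eth2 eth3 \
    lacp=active \
    bond_mode=balance-tcp

# Verify the bond came up and see how flows are distributed:
ovs-appctl bond/show bond0
```

Note that balance-tcp only helps when there are many concurrent flows; a single TCP stream still tops out at one link's speed, which is why Ceph (lots of OSD-to-OSD and client connections) balances so well here.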
It scales out really well.

-- Lindsay