Re: Setting up a small experimental CEPH network

I tested something in the past [1] where I noticed that an OSD saturated 
one bond link and did not use the available second one. I may have made 
a mistake in writing down that it was a 1x replicated pool. However, it 
has been written here multiple times that these OSD processes are 
single-threaded, so AFAIK they cannot use more than one link, and the 
moment your OSD has a saturated link, your clients will notice it.


[1]
https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg35474.html
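
To illustrate why that one saturated link matters: with an LACP bond in 
balance-tcp / layer3+4 mode, each TCP connection is hashed onto exactly 
one member link, so a single busy OSD-to-OSD connection can never use 
more than one link's worth of bandwidth. A rough Python sketch of the 
idea (the hash function, addresses and ports are made-up illustrations, 
not the real kernel/OVS code):

# Illustrative only: layer3+4 style per-flow hashing as done by LACP
# bonds (xmit_hash_policy layer3+4 / OVS balance-tcp), sketched in Python.
import hashlib

BOND_LINKS = 2  # two member links in the bond

def link_for_flow(src_ip, src_port, dst_ip, dst_port):
    """Pick the member link for one TCP connection (flow-tuple hash)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % BOND_LINKS

# A single replication connection between two OSDs always hashes to the
# same member link, so it can saturate that link while the other idles.
print(link_for_flow("10.0.0.1", 6800, "10.0.0.2", 6801))
print(link_for_flow("10.0.0.1", 6800, "10.0.0.2", 6801))  # same link every time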



-----Original Message-----
From: Lindsay Mathieson [mailto:lindsay.mathieson@xxxxxxxxx] 
Sent: Monday, 21 September 2020 2:42
To: ceph-users@xxxxxxx
Subject:  Re: Setting up a small experimental CEPH network

On 21/09/2020 5:40 am, Stefan Kooman wrote:
> My experience with bonding and Ceph is pretty good (OpenvSwitch). Ceph 
> uses lots of tcp connections, and those can get shifted (balanced) 
> between interfaces depending on load.

Same here - I'm running 4x1GbE (LACP, Balance-TCP) on a 5 node cluster 
with 19 OSDs and 20 active VMs; it idles at under 1 MiB/s and spikes up 
to 100 MiB/s no problem. When doing a heavy rebalance/repair, data rates 
on any one node can hit 400 MiB/s+.


It scales out really well.
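
That matches the per-flow hashing behaviour: Ceph opens many TCP 
connections between clients, OSDs and MONs, and under balance-tcp those 
flows land on different bond members, so the aggregate traffic spreads 
out even though any single flow stays on one link. A rough, 
self-contained Python sketch (the hash function, addresses and the 
4-link bond are illustrative assumptions, not the real implementation):

# Illustrative only: many concurrent Ceph TCP connections hash onto
# different bond members, so aggregate traffic spreads across the bond
# even though each individual flow is pinned to one link.
import hashlib
from collections import Counter

BOND_LINKS = 4  # e.g. a 4x1GbE LACP bond as described above

def link_for_flow(src_ip, src_port, dst_ip, dst_port):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % BOND_LINKS

# 100 client/OSD connections with varying source ports
flows = [("10.0.0.1", 40000 + i, "10.0.0.2", 6800 + (i % 8)) for i in range(100)]
print(Counter(link_for_flow(*f) for f in flows))  # roughly even per-link counts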

--
Lindsay

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


