Greg,

Do you have any estimate of how much network traffic the heartbeat messages generate? How busy is it? At some point (if the cluster gets big enough), could this degrade network performance? Would it make sense to have a separate network for this? So in addition to the public and storage networks we would have a heartbeat network, which we could pin to a specific network link.

––––
Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72
Mail: sebastien.han@xxxxxxxxxxxx
Address: 10, rue de la Victoire - 75009 Paris
Web: www.enovance.com - Twitter: @enovance

On 22 Jan 2014, at 19:01, Gregory Farnum <greg@xxxxxxxxxxx> wrote:

> On Tue, Jan 21, 2014 at 8:26 AM, Sylvain Munaut
> <s.munaut@xxxxxxxxxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> I noticed in the documentation that each OSD daemon should use 3 ports,
>> so when I set up the cluster I originally opened enough ports to
>> accommodate this (with a small margin so that restarts could proceed
>> even if ports aren't released immediately).
>>
>> However, today I noticed that the OSD daemons are using 5 ports, so
>> for some of them a port or two were blocked by the firewall.
>>
>> All the OSDs were still reporting as OK and the cluster didn't report
>> anything wrong, but I was seeing some weird behavior that could have
>> been related.
>>
>> So is that usage of 5 TCP ports normal? And if it is, could the docs
>> be updated?
>
> Normal! It's increased a couple of times recently because we added
> heartbeating on both the public and cluster network interfaces.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
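[Editor's note] Since the firewall problem above came from undersizing the per-OSD port budget, here is a minimal sketch of the sizing arithmetic: 5 TCP ports per OSD daemon (as Greg confirms), plus a per-OSD restart margin like the one Sylvain describes. The base port 6800 reflects Ceph's default bind range start; the function name and the margin value are illustrative assumptions, not from the thread.

```python
# Rough sizing of the firewall port range for the OSD daemons on one host.
# Assumptions: each OSD binds 5 TCP ports (per Greg's reply); the margin
# lets restarted daemons bind fresh ports before old ones are released.
BASE_PORT = 6800  # Ceph's default ms_bind_port_min

def osd_port_range(num_osds, ports_per_osd=5, margin_per_osd=2):
    """Return (first_port, last_port) to open for num_osds OSD daemons."""
    needed = num_osds * (ports_per_osd + margin_per_osd)
    return BASE_PORT, BASE_PORT + needed - 1

# e.g. a host with 12 OSDs:
print(osd_port_range(12))  # -> (6800, 6883)
```

In practice it is simpler to open Ceph's whole default OSD bind range rather than count ports per daemon, since the exact per-OSD count has changed across releases.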