Re: OSD port usage


On Friday, January 24, 2014, Sebastien Han <sebastien.han@xxxxxxxxxxxx> wrote:
Greg,

Do you have any estimate of how heavily the heartbeat messages use the network?
How busy is it?

Not very. It's one very small message per OSD peer per...second?
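(For anyone who wants to see or tune that cadence: the heartbeat timing is controlled by a couple of ceph.conf options. The values below are the commonly cited defaults of that era, so double-check them against your release.)

```
[osd]
; how often an OSD pings its peers (seconds)
osd heartbeat interval = 6
; how long a peer can go unheard before it is reported down (seconds)
osd heartbeat grace = 20
```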
 

At some point (if the cluster gets big enough), could this degrade network performance? Would it make sense to have a separate network for this?

As Sylvain said, that would negate the entire point of heartbeating on both networks. Trust me, you don't want to deal with a cluster where the OSDs can't talk to each other but they can talk to the monitors and keep marking each other down.
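(As a sketch of the dual-network setup Greg is describing: with both networks declared in ceph.conf, each OSD heartbeats its peers over both interfaces, so a failure of either link is detected. The addresses below are placeholders.)

```
[global]
; client-facing traffic, plus one set of heartbeats
public network  = 192.168.1.0/24
; replication traffic, plus the second set of heartbeats
cluster network = 10.0.0.0/24
```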
-Greg
 

So in addition to the public and storage networks we would have a heartbeat network, which we could pin to a specific network link.

––––
Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood.”

Phone: +33 (0)1 49 70 99 72
Mail: sebastien.han@xxxxxxxxxxxx
Address: 10, rue de la Victoire - 75009 Paris
Web: www.enovance.com - Twitter: @enovance

On 22 Jan 2014, at 19:01, Gregory Farnum <greg@xxxxxxxxxxx> wrote:

> On Tue, Jan 21, 2014 at 8:26 AM, Sylvain Munaut
> <s.munaut@xxxxxxxxxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> I noticed in the documentation that the OSD should use 3 ports per OSD
>> daemon running, and so when I set up the cluster I originally opened
>> enough ports to accommodate this (with a small margin so that restarts
>> could proceed even if ports aren't released immediately).
>>
>> However, today I noticed that OSD daemons are using 5 ports, and so
>> for some of them a port or two were blocked by the firewall.
>>
>> All the OSDs were still reporting as OK and the cluster didn't report
>> anything wrong, but I was getting some weird behavior that could have
>> been related.
>>
>>
>> So is that usage of 5 TCP ports normal? And if it is, could the docs
>> be updated?
>
> Normal! It's increased a couple of times recently because we added
> heartbeating on both the public and cluster network interfaces.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
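(Sylvain's firewall-sizing problem above can be sketched numerically: with 5 TCP ports per OSD daemon plus some slack for restarts, the range to open on an OSD host works out as below. PORTS_PER_OSD matches the observation in this thread; the margin and base port are assumptions -- 6800 is the usual start of the OSD port range, but check `ms bind port min`/`ms bind port max` on your cluster.)

```python
# Back-of-the-envelope firewall sizing for an OSD host.
PORTS_PER_OSD = 5   # observed in this thread (the docs previously said 3)
MARGIN_PER_OSD = 2  # hypothetical slack so restarts work before old ports are freed
BASE_PORT = 6800    # assumed start of the OSD port range (ms bind port min)

def ports_to_open(num_osds):
    """Return the inclusive (low, high) TCP port range to open for num_osds daemons."""
    total = num_osds * (PORTS_PER_OSD + MARGIN_PER_OSD)
    return BASE_PORT, BASE_PORT + total - 1

print(ports_to_open(12))  # a 12-OSD host
```

In practice the simpler operational fix is to just open whatever your release documents as the full OSD port range on the cluster-facing interface, rather than counting ports per daemon.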



--
Software Engineer #42 @ http://inktank.com | http://ceph.com
