Re: Understanding the number of TCP connections between clients and OSDs


 



Hope this will be helpful..

 

Total connections per OSD = (Target PGs per OSD) * (# of pool replicas) * 3 + (2 * # of clients) + (min_hb_peer)

Where:

# of pool replicas = configurable, default is 3

3 = the number of data communication messengers (cluster, hb_backend, hb_frontend)

min_hb_peer = default is 20 I guess..

Total connections per node = total connections per OSD * number of OSDs per node
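A minimal sketch of the arithmetic above, assuming the formula as written; the function name and the example values (100 PGs per OSD, 50 clients, 10 OSDs per node) are illustrative, not from any Ceph source:

```python
def estimate_connections_per_osd(target_pgs_per_osd,
                                 num_clients,
                                 pool_replicas=3,
                                 messengers=3,   # cluster, hb_backend, hb_frontend
                                 min_hb_peer=20):
    """Rough estimate of TCP connections per OSD, per the formula above."""
    return (target_pgs_per_osd * pool_replicas * messengers
            + 2 * num_clients
            + min_hb_peer)

def estimate_connections_per_node(per_osd, osds_per_node):
    """Scale the per-OSD estimate to a whole node."""
    return per_osd * osds_per_node

# Example: 100 PGs/OSD, 50 clients -> 100*3*3 + 2*50 + 20 = 1020
per_osd = estimate_connections_per_osd(100, 50)
print(per_osd)                                      # 1020
print(estimate_connections_per_node(per_osd, 10))   # 10200
```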

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Rick Balsano
Sent: Wednesday, November 04, 2015 12:28 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Understanding the number of TCP connections between clients and OSDs

 

Just following up since this thread went silent after a few comments showing similar concerns, but no explanation of the behavior. Can anyone point to some code or documentation which explains how to estimate the expected number of TCP connections a client would open based on read/write volume, # of volumes, # of OSDs in the pool, etc?

 

 

On Tue, Oct 27, 2015 at 5:05 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:

On Mon, Oct 26, 2015 at 10:48 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
> If we're talking about RBD clients (qemu) then the number also grows with
> number of volumes attached to the client.

I never thought about that but it might explain a problem we have
where multiple attached volumes crashes an HV. I had assumed that
multiple volumes would reuse the same rados client instance, and thus
reuse the same connections to the OSDs.

-- dan



 

--

Rick Balsano

Senior Software Engineer
Opower

O +1 571 384 1210

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


