Re: connection management in ceph

On Tue, 8 Mar 2011, H Chang wrote:
> I have a question on how ceph maintains tcp connections.
> 
> As I understand, given a cluster of N osds, ceph client can
> potentially read from, and write to all N osds, if crush distributes
> data randomly across all. 

Right.

> Then does the client maintain persistent tcp connections to all osds?

The kernel client closes out idle connections.  The userspace 
implementation does not do that yet.  

> How about tcp connections among osds (for replication)?  Does each osd
> maintain persistent tcp connection to all the other osds, essentially
> forming a clique?

The OSDs maintain open connections to the nodes they peer with, so the 
number of connections scales with the number of PGs they store rather than 
with the cluster size.  In a 10,000 node cluster, if an osd has ~100 PGs, 
it will have ~100 peer connections (for 2x replication).  For a 10 node 
system, each node will typically peer with every other node.
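A back-of-the-envelope sketch of that scaling, assuming an idealized 
placement where each PG's replica peers are chosen uniformly at random 
from the other OSDs (a simplification of CRUSH; the function name and 
parameters below are illustrative, not Ceph APIs):

```python
import random

def estimated_peers(n_osds, pgs_per_osd, replication=2, trials=200):
    """Simulate how many distinct peer OSDs a single OSD talks to,
    assuming each PG pairs it with (replication - 1) random other OSDs."""
    total = 0
    for _ in range(trials):
        peers = set()
        for _ in range(pgs_per_osd):
            peers.update(random.sample(range(n_osds - 1), replication - 1))
        total += len(peers)
    return total / trials

# Large cluster: peer count tracks the PG count, not the cluster size
# (slightly under 100 here, since two PGs can land on the same peer).
print(estimated_peers(10000, 100))

# Small cluster: the OSD ends up peering with nearly all 9 other nodes.
print(estimated_peers(10, 100))
```

The takeaway is that connection fan-out is bounded by PGs per OSD, so 
growing the cluster does not turn the OSD mesh into a full clique.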

sage
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx

