Re: rbd kernel block driver memory usage


>> 
>> There is a socket open to each OSD (object storage daemon).

I’ve always understood that there were *two* sockets to each OSD; was I misinformed?

>>  A Ceph cluster may have tens, hundreds or even thousands of OSDs (although the
>> latter is rare -- usually folks end up with several smaller clusters
>> instead of a single large cluster).

… though if a client has multiple RBD volumes attached, it may be talking to more than one cluster.  I’ve seen a client exhaust the file descriptor limit on a hypervisor this way after a cluster expansion.
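A quick way to spot that situation is to compare a client process’s open file descriptors against its soft limit. A minimal sketch, using this shell’s own pid (`$$`) as a stand-in for the qemu/librbd process you’d actually inspect on a hypervisor:

```shell
# Count open fds for a process via /proc; substitute the qemu pid
# for $$ on a real hypervisor (using our own pid here so it runs anywhere).
pid=$$
fd_count=$(ls /proc/"$pid"/fd | wc -l)
echo "pid $pid holds $fd_count open fds (soft limit: $(ulimit -n))"
```

If `fd_count` is creeping toward the limit after a cluster expansion, the per-OSD sockets are a likely suspect.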

>> A thing to note is that, by default, OSD sessions are shared between
>> RBD devices.  So as long as all RBD images that are mapped on a node
>> belong to the same cluster, the same set of sockets would be used.

Before … Luminous, was it? AIUI sessions weren’t shared back then, so older releases may show higher socket consumption.
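For what it’s worth, the sharing is also controllable per mapping: the kernel client accepts a `noshare` map option that gives an image its own set of OSD sessions instead of the shared ones. A sketch (pool/image names are hypothetical, and this obviously needs a live cluster):

```shell
# Default: second mapping from the same cluster reuses the existing
# OSD sessions/sockets.
rbd map mypool/imageA
rbd map mypool/imageB

# With noshare, imageB gets its own sessions -- more sockets and
# memory, but isolated from imageA's client instance.
rbd map -o noshare mypool/imageB
```

So even on releases with shared sessions, a node mapping images with `noshare` can see the older, higher per-image consumption.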