Re: rbd kernel block driver memory usage

On Thu, Jan 26, 2023 at 5:08 PM Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
>
> >>
> >> There is a socket open to each OSD (object storage daemon).
>
> I’ve always understood that there were *two* to each OSD, was I misinformed?

Hi Anthony,

It looks like you were misinformed -- there is just one client -> OSD
socket.
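
If you want to double-check on a client node, here is a quick Python
sketch -- not an official tool, just an illustration.  It counts
established IPv4 connections to the default OSD port range (6800-7300,
the ms_bind_port_min/max defaults) by parsing /proc/net/tcp, and it
assumes nothing else on the node talks to those ports:

#!/usr/bin/env python3
# Rough illustration: count established IPv4 TCP connections to the
# default Ceph OSD port range by parsing /proc/net/tcp.

OSD_PORT_MIN, OSD_PORT_MAX = 6800, 7300  # ms_bind_port_min/max defaults
ESTABLISHED = "01"                       # TCP state code in /proc/net/tcp

def count_osd_sockets(path="/proc/net/tcp"):
    count = 0
    with open(path) as f:
        next(f)                          # skip the header line
        for line in f:
            fields = line.split()
            rem_port = int(fields[2].split(":")[1], 16)
            if fields[3] == ESTABLISHED and \
                    OSD_PORT_MIN <= rem_port <= OSD_PORT_MAX:
                count += 1
    return count

print("sockets to OSD port range:", count_osd_sockets())

With session sharing (the default), that number should stay close to
the number of OSDs the node has actually talked to, no matter how many
images are mapped.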

>
> >>  A Ceph cluster may have tens, hundreds or even thousands of OSDs (although the
> >> latter is rare -- usually folks end up with several smaller clusters
> >> instead of a single large cluster).
>
> … though if a client has multiple RBD volumes attached, it may be talking to more than one cluster.  I’ve seen a client exhaust the file descriptor limit on a hypervisor doing this after a cluster expansion.
>
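Right -- that math is worth sanity-checking against the fd limit.  A
back-of-the-envelope sketch in Python (toy numbers, a guessed
per-image overhead, not real accounting):

import resource

# Toy estimate: one shared socket per OSD per cluster (the default),
# plus a made-up fudge factor per mapped image for everything else.
def estimate_fds(osds_per_cluster, mapped_images, per_image_overhead=4):
    return sum(osds_per_cluster) + mapped_images * per_image_overhead

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
needed = estimate_fds([500, 1200], mapped_images=40)  # hypothetical
print(f"estimated fds: {needed}  (soft limit {soft}, hard {hard})")
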
> >> A thing to note is that, by default, OSD sessions are shared between
> >> RBD devices.  So as long as all RBD images that are mapped on a node
> >> belong to the same cluster, the same set of sockets would be used.
>
> Before … Luminous was it? AIUI they weren’t pooled, so older releases may have higher consumption.

No, this behavior goes back to when RBD was introduced in 2010.  It has
always been enabled by default, so nothing changed in this regard around
Luminous.
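
For completeness: sharing can be disabled per mapping with the noshare
map option, in which case each image gets its own client instance and
its own set of sockets.  A toy illustration of what that does to the
socket count (simplified worst case where every image ends up doing
I/O to every OSD):

# Simplified arithmetic, not Ceph code: compare socket counts with and
# without shared OSD sessions in the all-images-touch-all-OSDs case.
def sockets_needed(num_osds, num_images, shared=True):
    return num_osds if shared else num_osds * num_images

for shared in (True, False):
    print(f"shared={shared}: {sockets_needed(1000, 20, shared)} sockets")

That multiplication is the reason sharing has been the default from the
start.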

Thanks,

                Ilya



