Re: k8s kernel clients: reasonable number of mounts per host, and limiting num client sessions

On Tue, Apr 6, 2021 at 1:45 PM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>
> On Tue, 2021-04-06 at 12:32 +0200, Dan van der Ster wrote:
> > On Mon, Apr 5, 2021 at 8:33 PM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > >
> > > On Thu, 2021-04-01 at 11:04 +0200, Dan van der Ster wrote:
> > > > Hi,
> > > >
> > > > Context: one of our users is mounting 350 ceph kernel PVCs per 30GB VM
> > > > and they notice "memory pressure".
> > > >
> > >
> > > Manifested how?
> >
> > Our users lost their monitoring data, so we are going to try to
> > reproduce the issue and get more details.
> > Do you know of any way to see how much memory is used by the kernel
> > clients, aside from the ceph_inode_info and ceph_dentry_info caches
> > that I see in slabtop?
>
> Nothing simple, I'm afraid, and even those don't tell you the full
> picture. ceph_dentry_info is a separate allocation from the actual
> dentry.
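
As an aside, a rough way to eyeball those ceph slab caches (with the
caveat above that they don't tell the whole story) is the slab
accounting in procfs, e.g.:

    # one-shot slabtop output, filtered to the ceph-specific caches
    slabtop -o | grep -i ceph

    # or read the raw counters directly
    grep ceph /proc/slabinfo

active_objs times objsize gives a ballpark figure, but it omits the
generic dentry/inode and page-cache memory the mounts also pin.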

I've just created 1000 cephx users and mounted a largish cluster 1000
times from a single 8GB VM. I saw the used memory increase by around
1GB after the mounts completed, and that memory was freed again once I
unmounted.
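
Roughly what that looked like; the fs name "cephfs", the mon host, and
the mount root below are placeholders rather than our real ones:

    # create a cephx user and a mount point per instance, then mount
    for i in $(seq 1 1000); do
        ceph fs authorize cephfs client.test$i / r \
            > /etc/ceph/ceph.client.test$i.keyring
        mkdir -p /mnt/ceph/$i
        mount -t ceph mon1.example.com:6789:/ /mnt/ceph/$i -o name=test$i
    done

mount.ceph should pick up each secret from the per-user keyring file
in /etc/ceph.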

The path that I mounted contains an unpacked Linux kernel tarball. I
ran 'find /' and 'md5sum linux.tgz' across 200 of those mounts
simultaneously, and they all completed quickly and uneventfully, with
no noticeable impact on memory consumption (aside from the expected
growth in the dentry and page caches).
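
The parallel run was nothing fancy, something along these lines
(reusing the placeholder /mnt/ceph/$i mount points from above):

    # walk the tree and checksum the tarball on 200 mounts at once
    for i in $(seq 1 200); do
        ( find /mnt/ceph/$i > /dev/null; md5sum /mnt/ceph/$i/linux.tgz ) &
    done
    wait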

So, I'm concluding that this whole thread was noise; we can support
hundreds of mounts per host without concern.

Thanks for your time.

Best Regards,
Dan
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


