k8s kernel clients: reasonable number of mounts per host, and limiting num client sessions

Hi,

Context: one of our users is mounting 350 CephFS kernel PVCs per 30 GB VM,
and they are noticing memory pressure on those hosts.

When planning for k8s hosts, what would be a reasonable limit on the
number of CephFS kernel PVCs to mount per host? If one kernel mounts the
same CephFS several times (with different prefixes), we observed that
each mount creates its own client session. But does the ceph kernel
module share a single global copy of the cluster metadata, e.g. osdmaps,
or is all of that duplicated per session? Can anyone estimate how much
memory each mount consumes (assuming it is a client of a cluster with
O(1k) OSDs)?
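
(For reference, this is roughly how we counted sessions per host: each
kernel mount shows up as its own "<fsid>.client<id>" directory under
/sys/kernel/debug/ceph. The sketch below is just what we hacked together,
assuming debugfs is mounted and you have root -- not authoritative.)

#!/usr/bin/env python3
# Rough sketch: count ceph kernel client sessions on this host by listing
# /sys/kernel/debug/ceph. Each kernel mount appears there as a separate
# "<fsid>.client<id>" directory, even when the same cephfs is mounted
# several times with different prefixes. Needs debugfs mounted and root.
import os

DEBUG_DIR = "/sys/kernel/debug/ceph"

def list_sessions():
    try:
        entries = os.listdir(DEBUG_DIR)
    except OSError as e:
        raise SystemExit(f"cannot read {DEBUG_DIR}: {e}")
    # one directory per client session, named <fsid>.client<global_id>
    return [e for e in entries if ".client" in e]

if __name__ == "__main__":
    sessions = list_sessions()
    print(f"{len(sessions)} kernel client session(s) on this host:")
    for s in sorted(sessions):
        print("  ", s)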

Also, k8s makes it trivial for a user to mount a single PVC from
hundreds or thousands of clients. Suppose we wanted to limit the number
of clients per PVC: do you think a new `max_sessions=N` cephx cap would
be the best approach for this?
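
(In the meantime we observe the current fan-out from the MDS side by
grouping sessions by their mount root, roughly as in the sketch below.
The exact JSON field names from `ceph tell mds.<name> session ls`, e.g.
"client_metadata" and "root", are from memory, so treat them as
assumptions and adjust for your release; `ceph daemon mds.<name> session
ls` on the MDS host works too.)

#!/usr/bin/env python3
# Sketch: count cephfs client sessions per mount root (i.e. per PVC path)
# by parsing "ceph tell mds.<name> session ls". Field names below
# ("client_metadata", "root") are assumed from recent releases.
import json
import subprocess
import sys
from collections import Counter

def sessions_per_root(mds="0"):
    # session ls emits JSON; --format=json is passed to be explicit
    out = subprocess.check_output(
        ["ceph", "tell", f"mds.{mds}", "session", "ls", "--format=json"])
    sessions = json.loads(out)
    roots = Counter()
    for s in sessions:
        root = s.get("client_metadata", {}).get("root", "?")
        roots[root] += 1
    return roots

if __name__ == "__main__":
    mds = sys.argv[1] if len(sys.argv) > 1 else "0"
    for root, count in sessions_per_root(mds).most_common():
        print(f"{count:6d}  {root}")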

Best Regards,

Dan
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


