Re: cephadm autotuning ceph-osd memory

On 2021-02-12T09:00:41, Josh Durgin <jdurgin@xxxxxxxxxx> wrote:

> Containers don't have to have a memory cgroup limit. It may be helpful
> to avoid that with cephadm, perhaps using a strict memory limit in
> testing but not in production to avoid potential availability problems.

Instead of handling this externally (and fairly statically?) via
cephadm, how about having the Ceph daemons on a node communicate with
each other dynamically about memory availability, allocation, and
requirements?

Then the "total" memory limit of Ceph on a node would be "per-pod". Ceph
would manage the resources allocated to it itself.

Say, going back to the point raised earlier, if an additional OSD is
started, all the others would reduce their cache targets dynamically to
make room for it (assuming there is enough space; otherwise the new
daemon would not fully spin up and would exit or pause instead).

The overall limit could even be enforced by the OS via cgroups, without
k8s pods, if so chosen.
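
As for the enforcement side, a minimal sketch of what "via the OS"
could look like with cgroup v2, assuming the Ceph daemons on the host
are all grouped under one slice (the ceph.slice path below is made up;
memory.high and memory.max are the standard cgroup v2 controls):

#!/usr/bin/env python3
"""Sketch only: cap the memory of everything under an assumed Ceph
cgroup, independent of k8s. Adjust CEPH_CGROUP to however the daemons
are actually grouped on the host."""

from pathlib import Path

CEPH_CGROUP = Path("/sys/fs/cgroup/ceph.slice")  # assumed grouping of the Ceph daemons
NODE_MEMORY_BUDGET = 64 * 2**30                  # assumed: bytes Ceph may use on this host


def enforce_budget():
    # memory.high starts reclaim pressure before the cap is reached,
    # memory.max is the hard limit (both standard cgroup v2 files).
    (CEPH_CGROUP / "memory.high").write_text(str(int(NODE_MEMORY_BUDGET * 0.9)))
    (CEPH_CGROUP / "memory.max").write_text(str(NODE_MEMORY_BUDGET))


if __name__ == "__main__":
    enforce_budget()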



Regards,
    Lars

-- 
SUSE Software Solutions Germany GmbH, MD: Felix Imendörffer, HRB 36809 (AG Nürnberg)
"Architects should open possibilities and not determine everything." (Ueli Zbinden)