Re: OSD memory usage after cephadm adoption

Hi all,

Now that host masks seem to work, could somebody please shed some light on the relative priority of these settings:

ceph config set osd osd_memory_target X
ceph config set osd/host:A osd_memory_target Y
ceph config set osd/class:B osd_memory_target Z

Which one wins for an OSD on host A in class B?

The same question applies to an explicit ID. The expectation is that a setting for osd.ID always wins, then the masked values, then the generic osd setting, then the globals, and last the defaults. However, neither the relative precedence of masked values nor the precedence order in general is documented anywhere.
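For reference, the outcome can at least be probed empirically per daemon. A minimal sketch, assuming an OSD with id 0 on host A in class B (the id is a placeholder):

ceph config get osd.0 osd_memory_target                  # value the mons resolve for osd.0
ceph daemon osd.0 config show | grep osd_memory_target   # value the running daemon uses (run on its host)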

This is missing even from the latest docs: https://docs.ceph.com/en/quincy/rados/configuration/ceph-conf/#sections-and-masks . It would be great if someone could add it.

Thanks!
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Luis Domingues <luis.domingues@xxxxxxxxx>
Sent: Monday, July 17, 2023 9:36 AM
To: Sridhar Seshasayee
Cc: Mark Nelson; ceph-users@xxxxxxx
Subject: Re: OSD memory usage after cephadm adoption

It does indeed look like the bug I hit.

Thanks.

Luis Domingues
Proton AG


------- Original Message -------
On Monday, July 17th, 2023 at 07:45, Sridhar Seshasayee <sseshasa@xxxxxxxxxx> wrote:


> Hello Luis,
>
> Please see my response below:
>
> > But when I took a look at the memory usage of my OSDs, I was below that
> > value by quite a bit. Looking at the OSDs themselves, I have:
> >
> > "bluestore-pricache": {
> > "target_bytes": 4294967296,
> > "mapped_bytes": 1343455232,
> > "unmapped_bytes": 16973824,
> > "heap_bytes": 1360429056,
> > "cache_bytes": 2845415832
> > },
> >
> > And if I get the running config:
> > "osd_memory_target": "4294967296",
> > "osd_memory_target_autotune": "true",
> > "osd_memory_target_cgroup_limit_ratio": "0.800000",
> >
> > Which is not the value I observe from the config. I have 4294967296
> > instead of something around 7219293672. Did I miss something?
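> >
> > For reference, a sketch of where these readings presumably come from (the
> > osd id is a placeholder; exact section names can vary by release):
> >
> > ceph daemon osd.0 perf dump                        # pricache stats
> > ceph config show osd.0 | grep osd_memory_target    # running config via mgr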
>
> This is very likely due to https://tracker.ceph.com/issues/48750. The fix
> was recently merged into the main branch and should be backported soon all
> the way to Pacific.
>
> Until then, the workaround would be to set the osd_memory_target on each
> OSD individually to the desired value.
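>
> A minimal sketch of doing that across all OSDs, assuming a uniform target
> (the 7219293672 value from above is just an example):
>
> for id in $(ceph osd ls); do
>     ceph config set osd.$id osd_memory_target 7219293672
> done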
>
> -Sridhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



