Re: OSDs do not respect my memory tune limit

Thanks for the hint; I tried turning that off:

$ sudo ceph osd pool get cephfs_data pg_autoscale_mode
pg_autoscale_mode: on
$ sudo ceph osd pool set cephfs_data pg_autoscale_mode off
set pool 9 pg_autoscale_mode to off
$ sudo ceph osd pool get cephfs_data pg_autoscale_mode
pg_autoscale_mode: off
$ sudo ceph osd pool set cephfs_data pg_num 16
$ sudo ceph osd pool get cephfs_data pg_num
pg_num: 128
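
If I read the Nautilus-and-later behaviour right, a pg_num decrease is not
applied immediately: the monitors record it as a target and the cluster
merges PGs down step by step, so "get pg_num" keeps reporting the old value
for a while. To check whether the decrease was at least accepted (pool name
as above; the exact fields in the output may vary by release):

$ sudo ceph osd pool ls detail | grep cephfs_data
# expecting a pg_num_target field of 16 in that line if the merge is
# underway; pg_num should then shrink on its own as the merges complete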

On Fri, Dec 2, 2022 at 14:30, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

> Could be that you’re fighting with the autoscaler?
>
> > On Dec 2, 2022, at 4:58 AM, Daniel Brunner <daniel@brunner.ninja> wrote:
> >
> > Can I get rid of PGs? I tried to decrease the number on the pool again:
> >
> > Doing a backup and nuking the cluster seems a little too much work for me :)
> >
> >
> > $ sudo ceph osd pool get cephfs_data pg_num
> > pg_num: 128
> > $ sudo ceph osd pool set cephfs_data pg_num 16
> > $ sudo ceph osd pool get cephfs_data pg_num
> > pg_num: 128
> >
> > On Fri, Dec 2, 2022 at 10:22, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> >
> >>> my OSDs are running odroid-hc4's and they only have about 4GB of memory,
> >>> and every 10 minutes a random OSD crashes due to out of memory. Sadly the
> >>> whole machine gets unresponsive when the memory gets completely full, so no
> >>> ssh access or prometheus output in the meantime.
> >>
> >>> I've set the memory limit very low on all OSDs:
> >>>
> >>> for i in {0..17} ; do sudo ceph config set osd.$i osd_memory_target 939524096 ; done
> >>>
> >>> which is the absolute minimum, about 0.9GB.
> >>>
> >>> Why are the OSDs not respecting this limit?
> >>
> >> The memory limit you set with osd_memory_target only covers the parts
> >> of the OSD that _can_ scale their memory usage up and down, like read
> >> caches and so forth; it is not all of the RAM needed to run an OSD with
> >> many PGs/objects.  If the box is too small, it is too small.
> >>
> >> --
> >> May the most significant bit of your life be positive.
> >>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



