Re: Reef osd_memory_target and swapping

> On Oct 15, 2024, at 1:06 PM, Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote:
> 
> Hello.
> 
> I'm seeing the following in the Dashboard -> Configuration panel
> for osd_memory_target:
> 
> Default:
> 4294967296
> 
> Current Values:
> osd: 9797659437,
> osd: 10408081664,
> osd: 11381160192,
> osd: 22260320563
> 
> I have 4 hosts in the cluster right now - all OSD+MGR+MON.  3 have 128GB
> RAM, the 4th has 256GB.

https://docs.ceph.com/en/reef/cephadm/services/osd/#automatically-tuning-osd-memory

You have autotuning enabled, and it's trying to use a large fraction (0.7 by default) of the physical memory on each host. That would explain the four distinct values: one target computed per host, with the ~22 GB one presumably coming from the 256 GB machine. I don't know offhand how Ceph determines the amount of available memory, whether it looks specifically at physmem or only at vmem; if it looks at vmem, that could arguably be a bug.
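
For a rough sanity check: if the 256 GB host runs eight OSDs (just a guess, I don't know your layout), then 0.7 x 256 GB / 8 is about 22.4 GB, which roughly lines up with the 22260320563 you're seeing. Untested, but per the docs above, checking and overriding the autotuner should look something like this (4294967296 below is just the default from your Dashboard; pick whatever per-OSD target suits you):

    # See whether autotuning is on and what ratio cephadm is using
    ceph config get osd osd_memory_target_autotune
    ceph config get mgr mgr/cephadm/autotune_memory_target_ratio

    # Option 1: keep autotuning, but leave more headroom for the MON/MGR daemons
    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.5

    # Option 2: turn autotuning off and pin an explicit per-OSD target
    ceph config set osd osd_memory_target_autotune false
    ceph config set osd osd_memory_target 4294967296

Note that while autotuning is on, cephadm keeps re-applying its computed targets, so for option 2 the autotune flag has to come off first.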


>  On the host with 256GB, top shows some OSD
> processes with very high VIRT and RES values - the highest VIRT OSD has
> 13.0g.  The highest RES is 8.5g.
> 
> All 4 systems are currently swapping, but the 256GB system has much higher
> swap usage.
> 
> I am confused why I have 4 current values for osd_memory_target, and
> especially about the 4th one at 22GB.
> 
> Also, I'm recalling that there might be a recommendation to disable swap,
> and I could easily do 'swapoff -a' when the swap usage is lower than the
> free RAM.

I tend to advise not using swap at all. I'd suggest disabling swap in /etc/fstab, then serially rebooting your OSD nodes, waiting for recovery between each before proceeding to the next.
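
A minimal sketch of the per-node procedure, assuming (as you noted) the swapped pages fit back into free RAM; the sed one-liner is illustrative, so eyeball the fstab edit before rebooting:

    ceph osd set noout                         # don't rebalance while nodes bounce
    # then, on each node in turn:
    swapoff -a                                 # fold swapped pages back into RAM
    sed -i.bak '/\bswap\b/s/^/#/' /etc/fstab   # comment out swap entries
    reboot
    # wait for 'ceph -s' to show all PGs active+clean before the next node
    ceph osd unset noout                       # once every node is done

The noout flag just keeps the briefly-down OSDs from being marked out and triggering recovery traffic during each reboot.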

> 
> Can anybody shed any light on this?
> 
> Thanks.
> 
> -Dave
> 
> --
> Dave Hall
> Binghamton University
> kdhall@xxxxxxxxxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx