Limiting osd or buffer/cache memory with Pacific/cephadm?

Hello,

I'm running Ceph Pacific OSD servers that are orchestrated by cephadm (on
docker.io v20.10.8 on CentOS 7.9). The servers are a bit more modestly
equipped than others when it comes to memory per OSD.

Earlier, we were able to accommodate this by using the following
/etc/ceph/ceph.conf setting:

[osd]
osd memory target = 2147483648
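(For reference, since cephadm-managed daemons no longer read the host's
/etc/ceph/ceph.conf, I believe the equivalent nowadays is to set the option
in the centralized configuration database, something like:

```shell
# Set a 2 GiB memory target for all OSDs; this replaces the
# [osd] section of /etc/ceph/ceph.conf under cephadm:
ceph config set osd osd_memory_target 2147483648

# Check what a specific OSD actually sees (osd.0 is just an example):
ceph config get osd.0 osd_memory_target
```

Please correct me if that is not the right way to carry the old setting over.)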

However, since I have switched to cephadm and Pacific (I guess this issue
is more related to the former than the latter), I'm seeing top/htop output
indicating that 50% of my memory is used by processes and the other 50%
by "buff/cache".

# free -h
              total        used        free      shared  buff/cache   available
Mem:           251G        105G         24G        1.5G        121G        132G
Swap:          7.8G         40M        7.7G

I'm seeing issues such as mons slowing down and going out of quorum, which I
saw earlier when memory was tight. Thus, I'm assuming that memory is the
issue here again...

Thanks,
Manuel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

