buff/cache is the Linux kernel buffer and page cache, which is unrelated to the Ceph BlueStore cache. Check the memory consumption of your individual OSD processes to confirm. The free output you pasted also shows 132G available (buffers and page cache entries are dropped automatically whenever processes need more RAM), so it certainly does not indicate memory pressure (quite the opposite, actually).

You can also inspect the memory usage of individual OSDs with

    ceph daemon osd.X dump_mempools

and

    ceph tell osd.X heap stats

On Wed, 29 Sept 2021 at 09:47, Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
>
> Hello,
>
> I'm running Ceph Pacific OSD servers that are orchestrated by cephadm (on
> docker.io v20.10.8 on CentOS 7.9). The servers are a bit ... less well
> equipped than others when it comes to memory per OSD.
>
> Earlier, we were able to accommodate this by using the following
> /etc/ceph/ceph.conf setting:
>
> [osd]
> osd memory target = 2147483648
>
> However, since I have switched to cephadm and Pacific (I guess this issue
> is related more to the former than the latter), I'm seeing top/htop output
> indicating that 50% of my memory is used by processes and the other 50%
> by "buff/cache".
>
> # free -h
>               total        used        free      shared  buff/cache   available
> Mem:           251G        105G         24G        1.5G        121G        132G
> Swap:          7.8G         40M        7.7G
>
> I'm seeing issues such as the MONs slowing down and going out of quorum
> that I saw earlier when memory was tight. Thus, I'm assuming that memory
> is the issue here again...
>
> Thanks,
> Manuel
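
For completeness, a minimal sketch of the inspection steps above, assuming OSD id 0 and that you run this on the OSD host. With a cephadm/containerized deployment the admin socket lives inside the container, so you may need to enter it via "cephadm shell" first. The last line ("ceph config get") is an extra check not mentioned above, just to confirm the memory target survived the move to cephadm:

    # resident memory (RSS) of every OSD process on this host
    ps -o pid,rss,cmd -C ceph-osd

    # per-OSD memory pool accounting via the local admin socket
    ceph daemon osd.0 dump_mempools

    # tcmalloc heap statistics (also works remotely through the cluster)
    ceph tell osd.0 heap stats

    # effective memory target currently applied to this OSD
    ceph config get osd.0 osd_memory_target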