Re: Fwd: Active-Active MDS RAM consumption

Hi Kamil,

This looks like this issue <https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/RUD2VNTIM2PJVAQYY6FVIPP6HYXUW4AC/>, where the MDS loads all directory entries into memory. You should probably take a look at the mds_oft_prefetch_dirfrags setting (its default changed from true to false in 16.2.8).
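
If that turns out to be the cause, a quick check and adjustment could look roughly like this (just a sketch; on 16.2.7 the default is still true, and whether disabling it is appropriate depends on your workload):

  # Check the current value of the setting (default true on 16.2.7)
  ceph config get mds mds_oft_prefetch_dirfrags

  # Match the 16.2.8+ default by disabling open file table dirfrag prefetch
  ceph config set mds mds_oft_prefetch_dirfrags false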

Best regards,
Gerdriaan Mulder

On 01/09/2022 14.39, Kamil Madac wrote:
Hi Ceph Community

One of my customers has an issue with their MDS cluster. The Ceph cluster is
deployed with cephadm and runs version 16.2.7. As soon as MDS is switched
from Active-Standby to Active-Active-Standby, the MDS daemon starts to
consume a lot of RAM. After some time it reaches 48 GB of RAM and the
container engine kills it. The same then happens on the second node, which
is also killed after some time, and the situation repeats.

When the MDS cluster is switched back to the Active-Standby configuration,
the situation stabilizes.

mds_cache_memory_limit is set to 4294967296 (4 GiB), which is the default value.
No health warning about high cache consumption is generated.
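
For reference, the configured limit and the MDS daemon's actual cache usage
can be compared like this (mds.<name> is a placeholder for the real daemon name):

  ceph config get mds mds_cache_memory_limit
  ceph tell mds.<name> cache status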

Is that known behavior, and can it be solved by some reconfiguration?

Can someone give us a hint on what to check, debug or tune?

Thank you.

Kamil Madac
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


