Ok, it seems like it doesn’t go below 600MB out of the 256GB; let’s wait until the degraded PGs heal. Did I do something wrong? I set the bluefs option in the global config and restarted ceph.target on the OSD node :/ Does this need something special to apply?

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx<mailto:istvan.szabo@xxxxxxxxx>
---------------------------------------------------

From: Konstantin Shalygin <k0ste@xxxxxxxx>
Sent: Friday, May 14, 2021 3:26 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

Nope, the kernel reserves enough memory to free under pressure. For example, a host with 36 OSDs and 0.5 TiB RAM:

              total        used        free      shared  buff/cache   available
Mem:           502G        168G        2.9G         18M        331G        472G
Swap:          952M        248M        704M

k

On 14 May 2021, at 11:20, Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx<mailto:Istvan.Szabo@xxxxxxxxx>> wrote:

When does this stop 😃 ?

When it dies … :D
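To answer the "does this need something special to apply" question, one way to check is to set the option in the monitors' central config database and then compare it against what a running OSD daemon actually reports. A minimal sketch, assuming an Octopus-or-later cluster where the central config store is in use, and using osd.0 as a placeholder OSD id:

```shell
# Set bluefs_buffered_io for all OSDs in the central config database.
ceph config set osd bluefs_buffered_io true

# Ask the config database what value applies to a specific OSD.
ceph config get osd.0 bluefs_buffered_io

# On the OSD host itself, check what the running daemon actually uses;
# if this still shows "false", the daemon has not picked up the change
# and needs a restart (or the option was overridden in ceph.conf).
ceph daemon osd.0 config show | grep bluefs_buffered_io
```

If a value is pinned in a local ceph.conf on the OSD host, it takes precedence over the central config database, which is a common reason a `ceph config set` appears to have no effect after a restart.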