Hi. Thank you for the explanation. I get it now.

Michal

On 4/10/23 20:44, Alexander E. Patrakov wrote:
> On Sat, Apr 8, 2023 at 2:26 PM Michal Strnad <michal.strnad@xxxxxxxxx> wrote:
>>   cluster:
>>     id:     a12aa2d2-fae7-df35-ea2f-3de23100e345
>>     health: HEALTH_WARN
>> ...
>>     pgs:    1656117639/32580808518 objects misplaced (5.083%)
>
> That's why the space is eaten. The stuff that eats the disk space on
> MONs is osdmaps, and the MONs have to keep old osdmaps back to the
> moment in the past when the cluster was last 100% healthy. Note that
> osdmaps are also copied to all OSDs and eat space there, which is what
> you have seen.
>
> The relevant (but dangerous) configuration parameter is
> "mon_osd_force_trim_to". Better not to use it; let your Ceph cluster
> recover on its own. If you can't wait, use upmaps to declare that all
> PGs are fine where they are now, i.e., that they are not misplaced.
> There is a script somewhere on GitHub that does this, but
> unfortunately I can't find it right now.
>
> --
> Alexander E. Patrakov
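The osdmap backlog described above is easy to measure. Below is a minimal
sketch, assuming the "ceph" CLI is available and that "ceph report" exposes
the osdmap_first_committed and osdmap_last_committed fields (present on
recent releases, but verify on yours):

    #!/usr/bin/env python3
    # Show how many historical osdmap epochs the MONs are still retaining.
    # The MONs cannot trim below the epoch at which the cluster was last
    # clean, so this gap keeps growing while PGs remain misplaced.
    import json
    import subprocess

    report = json.loads(subprocess.check_output(["ceph", "report"]))
    first = report["osdmap_first_committed"]  # oldest epoch still kept
    last = report["osdmap_last_committed"]    # newest committed epoch
    print(f"osdmaps retained: {last - first} (epochs {first}..{last})")

If that gap is large, the retained maps (held on every MON and copied to
the OSDs) are what is eating the space.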
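The upmap workaround can be sketched roughly as follows. This is not the
GitHub script Alexander mentions, just a hypothetical illustration of the
idea; it assumes require-min-compat-client is luminous or newer and that
"ceph pg ls remapped --format json" returns a "pg_stats" array with "pgid",
"up" and "acting" fields (the JSON layout varies slightly across releases).
It only prints the commands so they can be reviewed before anything runs:

    #!/usr/bin/env python3
    # Sketch: pin every remapped PG to its current (acting) OSDs via
    # pg-upmap-items, so it is no longer counted as misplaced and the
    # MONs can trim old osdmaps once the cluster goes clean.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "pg", "ls", "remapped", "--format", "json"])
    for pg in json.loads(out)["pg_stats"]:
        up, acting = pg["up"], pg["acting"]
        # Where CRUSH's choice (up) differs from the OSD that actually
        # holds the data (acting), remap up -> acting, position by
        # position. PGs with existing upmap entries may need those
        # cleared first.
        pairs = []
        for u, a in zip(up, acting):
            if u != a:
                pairs += [str(u), str(a)]
        if pairs:
            # Printed, not executed: inspect before applying.
            print("ceph osd pg-upmap-items", pg["pgid"], " ".join(pairs))

Once nothing is misplaced and the cluster reports healthy, the MONs can
trim the accumulated osdmaps; the balancer can later move the data back
gradually by removing these upmap entries.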
_______________________________________________ ceph-users mailing list -- ceph-users@xxxxxxx To unsubscribe send an email to ceph-users-leave@xxxxxxx