I was able to get the osdmaps to slowly trim (maybe 50 would trim with each
change) by making small changes to the CRUSH map like this:

for i in {1..100}; do
  ceph osd crush reweight osd.1754 4.00001
  sleep 5
  ceph osd crush reweight osd.1754 4
  sleep 5
done

I believe this was the solution Dan came across back in the hammer days. It
works, but it's not ideal for sure. Across the cluster it freed up around
50TB of data! (A rough way to watch the trim progress is sketched after the
quoted message below.)

Bryan

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Bryan Stillwell <bstillwell@xxxxxxxxxxx>

I have a cluster with over 1900 OSDs running Luminous (12.2.8) that isn't
cleaning up old osdmaps after doing an expansion. This is even after the
cluster became 100% active+clean:

# find /var/lib/ceph/osd/ceph-1754/current/meta -name 'osdmap*' | wc -l
46181

With the osdmaps being over 600KB in size each, this adds up:

# du -sh /var/lib/ceph/osd/ceph-1754/current/meta
31G     /var/lib/ceph/osd/ceph-1754/current/meta

I remember running into this during the hammer days:

Did something change recently that may have broken this fix?

Thanks,
Bryan
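As mentioned above, here is a rough way to watch whether the maps are
actually trimming while the reweight loop runs. It's only a sketch: it
assumes filestore OSDs with the meta directory under
/var/lib/ceph/osd/ceph-<id>/current/meta (as in the paths above), that jq is
installed, and, if I remember right, that `ceph report` exposes the
osdmap_first_committed and osdmap_last_committed fields.

while true; do
    # Count the osdmaps cached on one OSD (osd.1754 used as an example).
    echo -n "$(date +%T)  osdmaps on osd.1754: "
    find /var/lib/ceph/osd/ceph-1754/current/meta -name 'osdmap*' | wc -l

    # As I understand it, the OSDs only trim maps older than the mons'
    # osdmap_first_committed, so the gap between these two epochs should
    # shrink as the reweight loop forces new maps and trimming.
    ceph report 2>/dev/null | jq '.osdmap_first_committed, .osdmap_last_committed'

    sleep 60
done

Once the gap levels off you can stop the loop; the per-OSD counts should
catch up on their own after that.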
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com