Dear Ceph users,

Our CephFS is not releasing/freeing up space after deleting hundreds of terabytes of data. By now this has driven us into a "nearfull" OSD/pool situation and thus throttles IO. We are on ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable).

Recently we moved a bunch of data to a new pool with better EC. This was done by adding a new EC pool to the FS, assigning the FS root to the new EC pool via the directory layout xattr (so all new data is written to the new pool), and finally copying the old data into new folders.

I swapped the data as follows to retain the old directory structure, and also made snapshots for validation purposes. So basically:

    cp -r mymount/mydata/ mymount/new/      # creates the copy on the new pool
    mkdir mymount/mydata/.snap/tovalidate
    mkdir mymount/new/mydata/.snap/tovalidate
    mv mymount/mydata/ mymount/old/
    mv mymount/new/mydata mymount/

I could see the data increasing in the new pool as expected (ceph df). I compared the snapshots with hashdeep to make sure the new data is alright.

Then I went ahead and deleted the old data, basically:

    rmdir mymount/old/mydata/.snap/*    # this also removed a bunch of other, older snapshots
    rm -r mymount/old/mydata

At first we had a bunch of PGs in snaptrim/snaptrim_wait, but those finished quite some time ago. Now, two weeks later, the size of the old pool still hasn't really decreased. I'm still waiting for around 500 TB to be released (and much more is planned).

I honestly have no clue where to go from here. From my point of view (i.e. the CephFS mount), the data is gone, and I never hard- or soft-linked it anywhere. This doesn't seem to be a common issue; at least I couldn't find anything related or resolved in the docs or on the user list yet.

If anybody has an idea how to resolve this, I would highly appreciate it.

Best Wishes,
Mathias
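P.S. In case the exact commands matter: the layout change and the snapshot validation were done roughly as sketched below. The pool name and the temporary hash-file path are placeholders, not the real ones.

    # point the FS root at the new EC pool so all new files land there
    setfattr -n ceph.dir.layout.pool -v new_ec_pool mymount/
    getfattr -n ceph.dir.layout mymount/    # verify the layout took effect

    # compare old vs. new snapshot contents with hashdeep (relative paths)
    (cd mymount/old/mydata/.snap/tovalidate && hashdeep -r -l . > /tmp/old.hashes)
    (cd mymount/mydata/.snap/tovalidate && hashdeep -r -l -a -k /tmp/old.hashes .)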