Hello,

I have a Ceph pool (size=2) with three RBD images (2 TB each), and I have noticed that deleting data does not reclaim space on Ceph. What I did was attach the RBD device as a loopback device (loop0), mount it as a filesystem, and run "fstrim", which reported:

/mnt/fstrim/: 1.1 TiB (1167710830592 bytes) trimmed
/mnt/fstrim/: 1 TiB (1125871489024 bytes) trimmed
/mnt/fstrim/: 1.1 TiB (1186485440512 bytes) trimmed

That would be almost perfect, were it not for the fact that the space was not reclaimed on Ceph itself. With "ceph osd df" I still see the same usage even after waiting a few hours, and I don't see any activity.

I'm using Ceph v0.94.2 and kernel 3.10.

What is the proper way to reclaim unused space? I originally had pool size 3, but had to lower it to 2 to avoid my OSDs filling up.

Thanks.
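For reference, the workflow described above can be sketched roughly as follows. This is only an illustration under assumptions not stated in the post: a pool named "rbd", an image named "myimage", and the mountpoint /mnt/fstrim are all hypothetical names, and discards only reach the OSDs if the block layer in use actually passes them through.

```shell
# Sketch of a discard/trim workflow for an RBD image (requires root).
# Assumed names (not from the original post): pool "rbd", image "myimage",
# mountpoint /mnt/fstrim.

# Map the image with the kernel RBD client; prints the device node,
# e.g. /dev/rbd0. Whether discards are honored depends on the kernel's
# krbd discard support.
rbd map rbd/myimage

# Mount the filesystem on the mapped device.
mount /dev/rbd0 /mnt/fstrim

# Ask the filesystem to discard all unused blocks and report how much
# was trimmed.
fstrim -v /mnt/fstrim

# Cluster-side accounting updates asynchronously; compare usage before
# and after with:
ceph df
rados df
```

Checking that trimmed space eventually shows up in "ceph df" / "rados df" (rather than only in fstrim's own output) is the way to confirm the discards actually reached the OSDs.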