I'm worried that data deleted in radosgw wasn't actually deleted from the disk/cluster.

Here's the output of df:

    /dev/xvdf      1000G  779G  185G  81% /var/lib/ceph/osd/ceph-0

That disk is quite full. Now, for ceph -s I get:

    health HEALTH_OK
    <lines removed>
    pgmap v256604: 2304 pgs: 2304 active+clean; 1103 GB data, 1547 GB used, 378 GB / 2000 GB avail
    <lines removed>

Still looks pretty full here. And finally, here's the output when checking the only bucket we have:

    { "bucket": "<bucket-name-removed>",
      "pool": ".rgw.buckets",
      "id": "4122.1",
      "marker": "4122.1",
      "owner": "<owner-removed>",
      "usage": { "rgw.main": { "size_kb": 247104513,
          "size_kb_actual": 247345748,
          "num_objects": 108889}}}

This translates to around 236 GB, which is FAR from the roughly 770 GB that df and ceph -s report. The thing is, the only way we store data in Ceph is through radosgw, and the only bucket we have is the one shown above (yes, a pretty simple deployment). How can the stats be so different? Was the data not actually deleted from disk? The deletion took place yesterday, so the cluster has had some time to do any delayed deletion, if that's how it's done.

Any ideas? Thanks!

John
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
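As an aside, the "around 236 GB" figure quoted above follows from converting the size_kb_actual value in the bucket stats with binary units; a minimal sketch of that arithmetic (values copied from the radosgw-admin output in the message):

```python
# size_kb_actual as reported by `radosgw-admin bucket stats` above (in KB)
size_kb_actual = 247345748

# Convert KB -> GB using binary units: 1 GB = 1024 MB = 1024**2 KB
size_gb = size_kb_actual / 1024**2

print(round(size_gb))  # -> 236, the figure cited in the message
```

The same conversion applied to the pgmap numbers (1547 GB used across a 2000 GB cluster) is what makes the gap to the per-bucket total stand out.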