Hi all,

I have a couple of very big S3 buckets that store temporary data. We keep writing files to the buckets, which are then read and deleted; they serve as temporary storage. We write (and delete) circa 1 TB of data daily in each of those buckets, and their size has been mostly stable over time.

The issue is that radosgw-admin bucket stats says one bucket is 10 TB and the other 4 TB, but s3cmd du (and a sync I ran, which agrees) says 3.5 TB and 2.3 TB respectively.

The bigger bucket suffered from the orphaned objects bug (http://tracker.ceph.com/issues/18331). The smaller one was created under 10.2.3, so it may also have suffered from the same bug.

Any ideas what could be at play here? How can we reduce the actual usage?

Trimming part of the radosgw-admin bucket stats output:

    "usage": {
        "rgw.none": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 18446744073709551572
        },
        "rgw.main": {
            "size": 10870197197183,
            "size_actual": 10873866362880,
            "size_utilized": 18446743601253967400,
            "size_kb": 10615426951,
            "size_kb_actual": 10619010120,
            "size_kb_utilized": 18014398048099578,
            "num_objects": 1702444
        },
        "rgw.multimeta": {
            "size": 0,
            "size_actual": 0,
            "size_utilized": 0,
            "size_kb": 0,
            "size_kb_actual": 0,
            "size_kb_utilized": 0,
            "num_objects": 406462
        }
    },

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
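PS: the implausibly huge counters in the stats above (the rgw.none num_objects of 18446744073709551572 and the rgw.main size_utilized of 18446743601253967400) look like unsigned 64-bit counters that have underflowed past zero; reinterpreted as signed two's-complement values they come out slightly negative. A quick sketch to check this interpretation (my own observation, not confirmed anywhere):

```python
import struct

def as_signed64(u):
    """Reinterpret an unsigned 64-bit counter as a signed two's-complement value."""
    return struct.unpack('<q', struct.pack('<Q', u))[0]

# Values taken from the radosgw-admin bucket stats output above.
print(as_signed64(18446744073709551572))  # rgw.none num_objects  -> -44
print(as_signed64(18446743601253967400))  # rgw.main size_utilized -> -472455584216
```

So rgw.none is effectively reporting -44 objects, and rgw.main's "utilized" size is roughly -440 GiB, which would be consistent with bucket index accounting having gone wrong rather than with real extra data.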