On 2014-11-27 11:36, Yehuda Sadeh wrote:
On Wed, Nov 26, 2014 at 3:49 PM, b <b@benjackson.email> wrote:
On 2014-11-27 10:21, Yehuda Sadeh wrote:
On Wed, Nov 26, 2014 at 3:09 PM, b <b@benjackson.email> wrote:
On 2014-11-27 09:38, Yehuda Sadeh wrote:
On Wed, Nov 26, 2014 at 2:32 PM, b <b@benjackson.email> wrote:
I've been deleting a bucket which originally had 60TB of data in it; with our cluster doing only 1 replication, the total usage was 120TB. I've been deleting the objects slowly using S3 Browser, and I can see the bucket usage is now down to around 2.5TB, or 5TB with replication, but the usage in the cluster has not changed.
I've looked at garbage collection (radosgw-admin gc list --include-all) and it just reports square brackets: "[]".
I've run 'radosgw-admin temp remove --date=2014-11-20', and it doesn't appear to have had any effect.
Is there a way to check where this space is being consumed?
Running 'ceph df', the USED space in the buckets pool is not showing any of the 57TB that should have been freed up by the deletion so far.
Running 'radosgw-admin bucket stats | jshon | grep size_kb_actual' and adding up all the buckets' usage shows that the space has been freed from the bucket, but the cluster is all sorts of messed up.
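That per-bucket sum can be reproduced with standard tools; a minimal sketch, assuming the JSON layout 'radosgw-admin bucket stats' emits (the sample data below is illustrative, not real cluster output):

```shell
# Illustrative stand-in for 'radosgw-admin bucket stats' output; in
# practice pipe the command itself instead of this file.
cat > bucket_stats.json <<'EOF'
[{"bucket": "backups", "usage": {"rgw.main": {"size_kb_actual": 2500000}}},
 {"bucket": "logs",    "usage": {"rgw.main": {"size_kb_actual": 120000}}}]
EOF

# Pull every size_kb_actual value and sum them (KB across all buckets).
grep -o '"size_kb_actual": *[0-9]*' bucket_stats.json \
  | awk -F': *' '{sum += $2} END {print sum " KB"}'
```

On the real cluster, that total is what gets compared against the pool's USED figure from 'ceph df'.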
ANY IDEAS? What can I look at?
Can you run 'radosgw-admin gc list --include-all'?
Yehuda
I've done it before, and it just returns square brackets "[]" (see below):
radosgw-admin gc list --include-all
[]
Do you know which of the rados pools has all that extra data? Try listing that pool's objects and verify that there are no surprises there (e.g., use 'rados -p <pool> ls').
Yehuda
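One way to make sense of a huge listing is to group it by bucket prefix (the part of the object name before the first '_'); a minimal sketch with illustrative names, where the real listing would come from 'rados -p <pool> ls':

```shell
# Illustrative pool listing; real data would come from 'rados -p <pool> ls'.
cat > pool_listing.txt <<'EOF'
default.4804.14_photo1.jpg
default.4804.14_photo2.jpg
default.9999.1_leftover.dat
EOF

# Count objects per bucket prefix, largest first.
awk -F'_' '{count[$1]++} END {for (p in count) print count[p], p}' pool_listing.txt \
  | sort -rn
```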
I'm running that command now, and it's taking some time; there is a large number of objects.
Once it has finished, what should I be looking for?
I assume the pool in question is the one that holds your objects' data? You should be looking for objects that are not expected to exist anymore, and objects belonging to buckets that no longer exist. The problem here is identifying them.
I suggest starting by looking at all the existing buckets: compose a list of the bucket prefixes for the existing buckets, then check whether there are objects with different prefixes.
Yehuda
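That cross-check can be sketched like this, assuming object names begin with the bucket prefix followed by '_' (all names below are illustrative; the real inputs would be a 'rados -p <pool> ls' dump and the prefixes collected from 'radosgw-admin bucket stats'):

```shell
# Illustrative inputs: a saved pool listing and the prefixes of buckets
# that still exist.
cat > pool_objects.txt <<'EOF'
default.4804.14_photo1.jpg
default.4804.14_photo2.jpg
default.9999.1_leftover.dat
EOF
cat > live_prefixes.txt <<'EOF'
default.4804.14
EOF

# Turn each live prefix into an anchored pattern and print objects that
# match none of them -- candidates for leaked space. (Process
# substitution requires bash.)
grep -v -f <(sed 's/^/^/; s/$/_/' live_prefixes.txt) pool_objects.txt
```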
Any ideas? I've found the prefix; the number of objects in the pool matching that prefix is in the 21 millions, while 'radosgw-admin bucket stats' reports the bucket as having only 1.2 million.
Not sure where to go from here, and our cluster is slowly filling up while not clearing any space.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com