I set up a test ceph+rgw instance on Debian with Hammer (0.94.5), filled
some buckets with objects, and then deleted all of the buckets. What I
noticed is that my storage usage didn't go back down to zero.
This shows that I don't have any buckets left:
# radosgw-admin bucket list
[]
This shows numerous objects still left in the buckets pool:
# ceph df | grep -e NAME -e buckets
NAME                ID  USED  %USED  MAX AVAIL  OBJECTS
.rgw.buckets.index  7   0     0      24751G     3328
.rgw.buckets        8   696M  0      24751G     23305
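To watch the leak over repeated runs, I diff the OBJECTS column of `ceph df`
before and after each delete. A small parser sketch, assuming pool lines keep
the NAME-first, OBJECTS-last layout shown above (the helper name is mine):

```python
def pool_objects(ceph_df_output):
    """Parse 'ceph df' pool lines into {pool_name: object_count}.

    Expects lines like:
    .rgw.buckets 8 696M 0 24751G 23305
    i.e. NAME is the first whitespace-separated field and the
    OBJECTS count is the last one.
    """
    counts = {}
    for line in ceph_df_output.splitlines():
        fields = line.split()
        # only look at the rgw pools, skip the header row
        if fields and fields[0].startswith('.rgw'):
            counts[fields[0]] = int(fields[-1])
    return counts

sample = """NAME ID USED %USED MAX AVAIL OBJECTS
.rgw.buckets.index 7 0 0 24751G 3328
.rgw.buckets 8 696M 0 24751G 23305"""

print(pool_objects(sample))
# {'.rgw.buckets.index': 3328, '.rgw.buckets': 23305}
```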
Looking at the deletion process, I had multiple radosgw-admin bucket rm
commands running at the same time for the same bucket, which seems to be
the cause of the confusion. I also have bucket index sharding turned on
with "rgw override bucket index max shards = 64", which may be a
contributing factor based on some of the bucket delete error messages
I'm seeing.
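For reference, that sharding option lives in ceph.conf; the section name
below is the conventional gateway client section and is an assumption on
my part (only the option itself is from my setup):

```ini
; assumed section name; adjust to your gateway's client id
[client.radosgw.gateway]
rgw override bucket index max shards = 64
```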
I can reproduce these leftover objects in .rgw.buckets at will with the
attached scripts, which fill and then remove a bucket.
The log files show that some commands succeed and some fail:
# cat *.log
2016-01-31 06:55:22.497010 7fe25e29a800 0 could not get bucket info for
bucket=testbuck
2016-01-31 06:55:07.927293 7f8d023a9800 0 ERROR:
open_bucket_index_shard() returned ret=-2
2016-01-31 06:54:42.741989 7faf3ea67800 -1 ERROR: could not remove
bucket testbuck
2016-01-31 06:54:47.725828 7f98bd879800 -1 ERROR: could not remove
bucket testbuck
2016-01-31 06:55:12.663067 7fd0b29d2800 0 could not get bucket info for
bucket=testbuck
2016-01-31 06:55:17.545500 7f35a370a800 0 could not get bucket info for
bucket=testbuck
This is not an issue for me at this point, because it's a test cluster
whose rgw pools I can simply recreate, but it seems like a bug.
Kris Jurka
import boto
import boto.s3.connection
import boto.s3.key
access_key = 'x'
secret_key = 'y'
rgwhost = 'z'
conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host=rgwhost,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.create_bucket('testbuck')
bucket = conn.get_bucket('testbuck')
for i in range(1, 1000):
    k = boto.s3.key.Key(bucket)
    k.key = str(i)
    k.set_contents_from_string('abc123')
#!/bin/bash
for i in `seq 1 10`
do
radosgw-admin bucket rm --bucket testbuck --purge-objects > $i.log 2>&1 &
sleep 5
done
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com