Is repairing an RGW bucket index broken?

I'm wondering whether the 'radosgw-admin bucket check --fix' command is broken in Luminous (12.2.8).

I'm asking because I'm trying to reproduce a situation we have on one of our production clusters, and the command doesn't seem to do anything.  Here are the steps of my test:

1. Create a bucket with 1 million objects
2. Verify the bucket was sharded into 10 shards (100,000 objects each)
3. Remove one of the shards using the rados command
4. Verify the bucket is broken
5. Attempt to fix the bucket
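For reference, steps 1 through 3 looked roughly like this (the bucket and index-object names are from my test; s3cmd is just the client I used to create the objects, and any S3 client pointed at the RGW endpoint would work the same way):

```shell
# Step 1: create the bucket and fill it with 1 million objects
# (assumes s3cmd is already configured with the RGW endpoint and credentials)
s3cmd mb s3://bstillwell-1mil
for i in $(seq 1 1000000); do
    s3cmd put /tmp/empty-object "s3://bstillwell-1mil/obj-${i}"
done

# Step 2: confirm the index was sharded into 10 rados objects
rados -p .rgw.buckets.index ls | grep "default.1434737011.12485"

# Step 3: delete one shard to simulate the damage
rados -p .rgw.buckets.index rm .dir.default.1434737011.12485.7
```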

I got as far as step 4 (note that shard .7 is now missing from the listing):

# rados -p .rgw.buckets.index ls | grep "default.1434737011.12485" | sort
.dir.default.1434737011.12485.0
.dir.default.1434737011.12485.1
.dir.default.1434737011.12485.2
.dir.default.1434737011.12485.3
.dir.default.1434737011.12485.4
.dir.default.1434737011.12485.5
.dir.default.1434737011.12485.6
.dir.default.1434737011.12485.8
.dir.default.1434737011.12485.9
# radosgw-admin bucket list --bucket=bstillwell-1mil
ERROR: store->list_objects(): (2) No such file or directory

But step 5 is proving problematic:

# time radosgw-admin bucket check --fix --bucket=bstillwell-1mil

real	0m0.201s
user	0m0.105s
sys	0m0.033s

# time radosgw-admin bucket check --fix --check-objects --bucket=bstillwell-1mil

real	0m0.188s
user	0m0.102s
sys	0m0.025s


Could someone help me figure out what I'm missing?

Thanks,
Bryan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com