Recovering from broken sharding: fill_status OVER 100%

Hi,


On my test servers, running 12.2.5, I created a bucket, turned on versioning, and uploaded 100,000 objects, and the bucket index broke, as expected.  Dynamic resharding reported it was running but never completed.
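For completeness, versioning was turned on with the usual S3 API call, something along these lines (same endpoint, profile, and bucket as in the listings below):

$ /usr/local/bin/aws --endpoint-url http://test/ --profile test s3api put-bucket-versioning --bucket test2 --versioning-configuration Status=Enabled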

Then I upgraded that cluster to 12.2.7.  Resharding now appears to have finished, but the bucket stats report *300,000* objects instead of 100,000, while an S3 listing still shows 100,000 objects.
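My working guess is that the stats are still counting index entries from the old pre-reshard bucket instances, but I don't know how to confirm or clean that up.  This is how I've been trying to enumerate the instances for the bucket (the grep is just my guess at how the instance keys show up in the listing):

$ sudo radosgw-admin metadata list bucket.instance | grep test2
$ sudo radosgw-admin bucket stats --bucket test2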

How do I fix this?  We have a production cluster that has a similar bucket.

I have tried both "radosgw-admin bucket check" and "radosgw-admin bucket check --check-objects", and both just return [].
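For reference, these are the exact invocations (I assume --fix is the variant that would actually rewrite the stats, but I haven't dared to try that on anything that matters yet):

$ sudo radosgw-admin bucket check --bucket test2
[]
$ sudo radosgw-admin bucket check --check-objects --bucket test2
[]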


$ /usr/local/bin/aws --endpoint-url http://test/ --profile test s3 ls s3://test2/ | wc -l
100003

$ sudo radosgw-admin bucket limit check
[
    {
        "user_id": "test",
        "buckets": [
...
            {
                "bucket": "test2",
                "tenant": "",
                "num_objects": 300360,
                "num_shards": 2,
                "objects_per_shard": 150180,
                "fill_status": "OVER 100.000000%"
            }
        ]
    }
]

$ sudo radosgw-admin reshard status --bucket test2
[
    {
        "reshard_status": 0,
        "new_bucket_instance_id": "",
        "num_shards": -1
    },
    {
        "reshard_status": 0,
        "new_bucket_instance_id": "",
        "num_shards": -1
    }
]
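If it helps, I was also considering forcing another reshard to see whether rebuilding the index clears the bogus counts, along these lines (4 shards is an arbitrary choice on my part):

$ sudo radosgw-admin reshard add --bucket test2 --num-shards 4
$ sudo radosgw-admin reshard process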


Thanks,

Sean