bucket limit check is 3x actual objects after autoreshard/upgrade

Hi,


I was testing versioning and autosharding on Luminous 12.2.5, then upgrading to 12.2.7. I wanted to know whether the upgraded autosharded bucket is still usable. It looks like it is, but a "bucket limit check" seems to show too many objects.


On my test servers, I created a bucket under 12.2.5, turned on versioning and autosharding, and uploaded 100,000 objects, at which point uploads to the bucket hung (a known issue in 12.2.5). Autosharding said it was running but didn't complete.
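
For reference, the setup was roughly this (bucket creation and versioning via the standard S3 API calls, autosharding via rgw_dynamic_resharding = true in ceph.conf; endpoint and profile names as in the listings below):

$ /usr/local/bin/aws --endpoint-url http://test-cluster/ --profile test \
      s3api create-bucket --bucket test2
$ /usr/local/bin/aws --endpoint-url http://test-cluster/ --profile test \
      s3api put-bucket-versioning --bucket test2 \
      --versioning-configuration Status=Enabled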

Then I upgraded that cluster to 12.2.7. Resharding seems to have finished (two shards; "reshard status" below shows no reshard in progress), but "bucket limit check" says there are 300,000 objects (150k per shard) and gives a "fill_status OVER 100%" message.
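
If I understand the limit check right, it divides num_objects by num_shards and compares the result against rgw_max_objs_per_shard (default 100000), so it's the inflated num_objects that trips the warning: 300360 / 2 = 150180 per shard. The threshold can be read back on an RGW node (the admin socket path will vary per instance):

$ sudo ceph daemon /var/run/ceph/ceph-client.rgw.*.asok config get rgw_max_objs_per_shard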

But an "s3 ls" shows 100k objects in the bucket, and a "rados ls" shows 200k objects, two per file: one holds the file data and one is empty.

e.g. for file TEST.89488
$ rados ls -p default.rgw.buckets.data | grep TEST.89488\$
a7fb3a0d-e0a4-401c-b7cb-dbc535f3c1af.114156.2_TEST.89488 (empty)
a7fb3a0d-e0a4-401c-b7cb-dbc535f3c1af.114156.2__:ZuP3m9XRFcarZYrLGTVd8rcOksWkGBr_TEST.89488 (has data)
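
Sizes can be confirmed with rados stat, e.g.:

$ sudo rados stat -p default.rgw.buckets.data a7fb3a0d-e0a4-401c-b7cb-dbc535f3c1af.114156.2_TEST.89488
$ sudo rados stat -p default.rgw.buckets.data a7fb3a0d-e0a4-401c-b7cb-dbc535f3c1af.114156.2__:ZuP3m9XRFcarZYrLGTVd8rcOksWkGBr_TEST.89488

As far as I can tell, a zero-length head object plus a __:<version-id>_ instance object holding the data is normal for a versioned bucket, so the 200k rados object count itself looks expected.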

Both "bucket check" and "bucket check --check-objects" just return []


How should I go about fixing this?  The bucket *seems* functional, and I don't *think* there are extra objects, but the limit check thinks there are.  How do I find out what the index actually says, or whether there really are extra files that need removing?
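
I'm assuming the raw per-shard index entries can be dumped with something like the following (<instance_id> standing in for the current "id" from bucket stats, with .0 and .1 being the two shards), though I don't know how to map those entries back to what "bucket limit check" is counting:

$ sudo radosgw-admin bucket stats --bucket test2 | grep '"id"'
$ sudo radosgw-admin bi list --bucket test2 | grep -c '"idx"'
$ sudo rados -p default.rgw.buckets.index listomapkeys .dir.<instance_id>.0 | wc -l
$ sudo rados -p default.rgw.buckets.index listomapkeys .dir.<instance_id>.1 | wc -l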


Thanks for any ideas or pointers.


Sean

$ /usr/local/bin/aws --endpoint-url http://test-cluster/ --profile test s3 ls s3://test2/ | wc -l
100003

$ sudo rados ls -p default.rgw.buckets.data | grep -c TEST
200133

$ sudo radosgw-admin bucket limit check
[
    {
        "user_id": "test",
        "buckets": [
...
            {
                "bucket": "test2",
                "tenant": "",
                "num_objects": 300360,
                "num_shards": 2,
                "objects_per_shard": 150180,
                "fill_status": "OVER 100.000000%"
            }
        ]
    }
]

$ sudo radosgw-admin reshard status --bucket test2
[
    {
        "reshard_status": 0,
        "new_bucket_instance_id": "",
        "num_shards": -1
    },
    {
        "reshard_status": 0,
        "new_bucket_instance_id": "",
        "num_shards": -1
    }
]



