Re: "Lost" buckets on radosgw

On Mon, Nov 21, 2016 at 2:42 PM, Graham Allan <gta@xxxxxxx> wrote:
> Following up on this (same problem, looking at it with Jeff)...
>
> There was definite confusion with the zone/zonegroup/realm/period changes
> during the hammer->jewel upgrade. It's possible that our placement settings
> were lost or misconfigured at that point.
>
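> For what it's worth, I assume the placement config to compare against is
> whatever the zonegroup and zone currently report, i.e. something like:
>
>> # radosgw-admin zonegroup get   (does "placement_targets" still list ec42-placement?)
>> # radosgw-admin zone get        (does "placement_pools" still map ec42-placement
>>                                  to .rgw.buckets.ec42 / .index / .extra?)
>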
> However, what I find puzzling is that only some buckets in the same pool
> seem to be affected - if this were placement related, I'd expect all
> buckets in one pool to be affected and those in another pool not. Am I
> interpreting this wrongly?
>
> For example, here is one bucket which remains accessible:
>
>> # radosgw-admin metadata get bucket.instance:gta:default.691974.1
>> {
>>     "key": "bucket.instance:gta:default.691974.1",
>>     "ver": {
>>         "tag": "_3Z9nfFjZn97aV2YJ4nFhVuk",
>>         "ver": 85
>>     },
>>     "mtime": "2016-11-11 16:48:02.950760Z",
>>     "data": {
>>         "bucket_info": {
>>             "bucket": {
>>                 "name": "gta",
>>                 "pool": ".rgw.buckets.ec42",
>>                 "data_extra_pool": ".rgw.buckets.extra",
>>                 "index_pool": ".rgw.buckets.index",
>>                 "marker": "default.691974.1",
>>                 "bucket_id": "default.691974.1",
>>                 "tenant": ""
>>             },
>>             "creation_time": "2015-11-13 20:05:26.000000Z",
>>             "owner": "gta",
>>             "flags": 0,
>>             "zonegroup": "default",
>>             "placement_rule": "ec42-placement",
>>             "has_instance_obj": "true",
>>             "quota": {
>>                 "enabled": false,
>>                 "max_size_kb": -1,
>>                 "max_objects": -1
>>             },
>>             "num_shards": 32,
>>             "bi_shard_hash_type": 0,
>>             "requester_pays": "false",
>>             "has_website": "false",
>>             "swift_versioning": "false",
>>             "swift_ver_location": ""
>>         },
>>         "attrs": [
>>             {
>>                 "key": "user.rgw.acl",
>>                 "val":
>> "AgJ\/AAAAAgIXAAAAAwAAAGd0YQwAAABHcmFoYW0gQWxsYW4DA1wAAAABAQAAAAMAAABndGEPAAAAAQAAAAMAAABndGEDAzcAAAACAgQAAAAAAAAAAwAAAGd0YQAAAAAAAAAAAgIEAAAADwAAAAwAAABHcmFoYW0gQWxsYW4AAAAAAAAAAA=="
>>             },
>>             {
>>                 "key": "user.rgw.idtag",
>>                 "val": ""
>>             },
>>             {
>>                 "key": "user.rgw.manifest",
>>                 "val": ""
>>             }
>>         ]
>>     }
>> }
>
>
> while here is another, located in the same pool, which is not accessible:
>
>> # radosgw-admin metadata get bucket.instance:tcga:default.712449.19
>> {
>>     "key": "bucket.instance:tcga:default.712449.19",
>>     "ver": {
>>         "tag": "_vm0Og31XbhhtmnuQVZ6cYJP",
>>         "ver": 2010
>>     },
>>     "mtime": "2016-11-19 03:49:03.406938Z",
>>     "data": {
>>         "bucket_info": {
>>             "bucket": {
>>                 "name": "tcga",
>>                 "pool": ".rgw.buckets.ec42",
>>                 "data_extra_pool": ".rgw.buckets.extra",
>>                 "index_pool": ".rgw.buckets.index",
>>                 "marker": "default.712449.19",
>>                 "bucket_id": "default.712449.19",
>>                 "tenant": ""
>>             },
>>             "creation_time": "2016-01-21 20:51:21.000000Z",
>>             "owner": "jmcdonal",
>>             "flags": 0,
>>             "zonegroup": "default",
>>             "placement_rule": "ec42-placement",
>>             "has_instance_obj": "true",
>>             "quota": {
>>                 "enabled": false,
>>                 "max_size_kb": -1,
>>                 "max_objects": -1
>>             },
>>             "num_shards": 0,
>>             "bi_shard_hash_type": 0,
>>             "requester_pays": "false",
>>             "has_website": "false",
>>             "swift_versioning": "false",
>>             "swift_ver_location": ""
>>         },
>>         "attrs": [
>>             {
>>                 "key": "user.rgw.acl",
>>                 "val":
>> "AgKbAAAAAgIgAAAACAAAAGptY2RvbmFsEAAAAEplZmZyZXkgTWNEb25hbGQDA28AAAABAQAAAAgAAABqbWNkb25hbA8AAAABAAAACAAAAGptY2RvbmFsAwNAAAAAAgIEAAAAAAAAAAgAAABqbWNkb25hbAAAAAAAAAAAAgIEAAAADwAAABAAAABKZWZmcmV5IE1jRG9uYWxkAAAAAAAAAAA="
>>             },
>>             {
>>                 "key": "user.rgw.idtag",
>>                 "val": ""
>>             },
>>             {
>>                 "key": "user.rgw.manifest",
>>                 "val": ""
>>             }
>>         ]
>>     }
>> }
>
>
> if I do "ls --pool .rgw.buckets.ec42|grep default.712449.19" I can see
> objects with the above bucket ID, and fetch them, so I know the data is
> there...
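> That's using the rados tool directly - something along these lines, with a
> placeholder for the object name:
>
>> # rados ls --pool .rgw.buckets.ec42 | grep default.712449.19
>> # rados get --pool .rgw.buckets.ec42 <object name> /tmp/check-object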
>
> Does this seem like a placement_pool issue, or maybe some other unrelated
> issue?
>

Could be another, semi-related issue. Can you provide the output of the
commands that fail, with 'debug rgw = 20' and 'debug ms = 1' set?
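
For example, something like this in ceph.conf on the gateway host (the rgw
section name below is just a guess at yours; use whatever your setup calls
it, then restart the gateway):

    # section name is an assumption - use your actual rgw client section
    [client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1

or, if the failing command is radosgw-admin itself, pass the same settings
on the command line, e.g.:

    # radosgw-admin bucket stats --bucket=tcga --debug-rgw=20 --debug-ms=1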

Thanks,
Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



[Index of Archives]     [Information on CEPH]     [Linux Filesystem Development]     [Ceph Development]     [Ceph Large]     [Ceph Dev]     [Linux USB Development]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]     [xfs]


  Powered by Linux