Lost buckets when moving OSD location

Hi,
I'm running a v16.2.13 Ceph cluster. Yesterday I added some SSD nodes to replace HDD nodes. During the process, one SSD node had a different MTU, which caused some PGs to become inactive for a while. After correcting the MTU, all PGs are active+clean again. However, since then I can't access some buckets, even though their metadata is still there:

$ radosgw-admin bucket stats --bucket new-bucket
failure: (2002) Unknown error 2002:
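
If I read the RGW error codes correctly, 2002 should be ERR_NO_SUCH_BUCKET, so my guess is that the bucket entrypoint is gone while the instance record survived. A rough sketch of what I plan to check next (nothing here is confirmed yet, just the checks I had in mind):

$ # does the bucket entrypoint metadata still resolve?
$ radosgw-admin metadata get bucket:new-bucket

$ # is the bucket still present in the metadata listing at all?
$ radosgw-admin metadata list bucket | grep new-bucket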

$ radosgw-admin bucket list --uid owner_id
[
    ...,
    "new-bucket",
    "tesst"
]

$ radosgw-admin metadata get bucket.instance:new-bucket:699abe5c-9603-4718-9073-938888ed8bbb.4961564.2
{
    "key": "bucket.instance:new-bucket:699abe5c-9603-4718-9073-938888ed8bbb.4961564.2",
    "ver": {
        "tag": "_ppS69qh0QYSDPuH1I-kp51a",
        "ver": 32
    },
    "mtime": "2023-04-03T01:54:24.445009Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "new-bucket",
                "marker": "699abe5c-9603-4718-9073-938888ed8bbb.4961564.2",
                "bucket_id": "699abe5c-9603-4718-9073-938888ed8bbb.4961564.2",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "",
                    "data_extra_pool": "",
                    "index_pool": ""
                }
            },
            "creation_time": "2022-03-15T08:14:13.199771Z",
            "owner": "0e0481b66c814e0c9d34b5bee8779205",
            "flags": 6,
            "zonegroup": "a453e527-0ded-4e8f-a51e-7eff2293b056",
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {
                "enabled": false,
                "check_on_raw": false,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "num_shards": 0,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": ""
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val": "AgJNAQAAAwI+AAAAIAAAADBlMDQ4MWI2NmM4MTRlMGM5ZDM0YjViZWU4Nzc5MjA1FgAAAGFuaHZ0MTIxMjEyQHZjY2xvdWQudm4EAwMBAAABAQAAACAAAAAwZTA0ODFiNjZjODE0ZTBjOWQzNGI1YmVlODc3OTIwNQ8AAAACAAAAAAAAAAUDLAAAAAICBAAAAAIAAAAAAAAAAAAAAAAAAAACAgQAAAABAAAAAAAAAAEAAAAAAAAAIAAAADBlMDQ4MWI2NmM4MTRlMGM5ZDM0YjViZWU4Nzc5MjA1BQNiAAAAAgIEAAAAAAAAACAAAAAwZTA0ODFiNjZjODE0ZTBjOWQzNGI1YmVlODc3OTIwNQAAAAAAAAAAAgIEAAAADwAAABYAAABhbmh2dDEyMTIxMkB2Y2Nsb3VkLnZuAAAAAAAAAAABAAAAAQAAAAEAAAAAAAAA"
            },
            {
                "key": "user.rgw.iam-policy",
                "val": "eyJWZXJzaW9uIjogIjIwMTItMTAtMTciLCAiU3RhdGVtZW50IjogW3siRWZmZWN0IjogIkFsbG93IiwgIlByaW5jaXBhbCI6IHsiQVdTIjogWyJhcm46YXdzOmlhbTo6OnVzZXIvdW5kZWZpbmVkOmh1eW5ucC1jcmVhdGUtc3VidXNlci1mcm9tLXJnd2FkbWluIl19LCAiQWN0aW9uIjogWyJzMzpDcmVhdGVCdWNrZXQiLCAiczM6RGVsZXRlQnVja2V0Il0sICJSZXNvdXJjZSI6IFsiYXJuOmF3czpzMzo6Om5ldy1idWNrZXQiLCAiYXJuOmF3czpzMzo6Om5ldy1idWNrZXQvKiJdfV19"
            }
        ]
    }
}
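
Since the instance record above still has the marker and bucket_id, I was wondering whether the index objects survived and whether re-linking the bucket to this instance could bring it back. A sketch of what I'm considering, not something I've run yet; the index pool name default.rgw.buckets.index is an assumption and may differ in my zone, and I'm not sure bucket link is the right or safe recovery here, which is partly why I'm asking:

$ # check whether the index shard object(s) for this marker still exist
$ rados -p default.rgw.buckets.index ls | grep 699abe5c-9603-4718-9073-938888ed8bbb.4961564.2

$ # if the entrypoint is missing but the instance is intact, maybe re-link it?
$ radosgw-admin bucket link --bucket=new-bucket \
      --bucket-id=699abe5c-9603-4718-9073-938888ed8bbb.4961564.2 \
      --uid=0e0481b66c814e0c9d34b5bee8779205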

Does anyone have ideas on how to get these buckets back?
I'd really appreciate any help. Thanks.