Re: RadosGW Error : Error updating periodmap, multiple master zonegroups configured

On 06/09/2016 at 11:13, Orit Wasserman wrote:
> you can try:
> radosgw-admin zonegroup modify --zonegroup-id <zg id> --master=false

I tried, but there is no zonegroup with this ID listed; the zonegroup with this ID appears only in the zonegroup-map.

Anyway, I can still run a zonegroup get --zonegroup-id 4d982760-7853-4174-8c05-cec2ef148cf0:

Should I try to change the name of this zonegroup? I have two zonegroups with the same name but two different IDs (a rough sketch of what I have in mind is below, after the outputs).

$ radosgw-admin zonegroup get --zonegroup-id 4d982760-7853-4174-8c05-cec2ef148cf0
{
    "id": "4d982760-7853-4174-8c05-cec2ef148cf0",
    "name": "default",
    "api_name": "",
    "is_master": "false",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "c9724aff-5fa0-4dd9-b494-57bdb48fab4e",
    "zones": [
        {
            "id": "c9724aff-5fa0-4dd9-b494-57bdb48fab4e",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "custom-placement",
            "tags": []
        },
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "ccc2e663-66d3-49a6-9e3a-f257785f2d9a"
}

and the default one:

$ radosgw-admin zonegroup get --zonegroup-id default
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "ccc2e663-66d3-49a6-9e3a-f257785f2d9a"
}

$ radosgw-admin bucket list
2016-09-06 11:21:04.787391 7fb8a1f0b900  0 Error updating periodmap, multiple master zonegroups configured
2016-09-06 11:21:04.787407 7fb8a1f0b900  0 master zonegroup: 4d982760-7853-4174-8c05-cec2ef148cf0 and  default
2016-09-06 11:21:04.787409 7fb8a1f0b900  0 ERROR: updating period map: (22) Invalid argument
2016-09-06 11:21:04.787424 7fb8a1f0b900  0 failed to add zonegroup to current_period: (22) Invalid argument
2016-09-06 11:21:04.787432 7fb8a1f0b900 -1 failed converting region to zonegroup : ret -22 (22) Invalid argument
couldn't init storage provider
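
What I had in mind for the renaming is roughly this (untested; "default-old" and zg.json are just placeholder names I made up, and I am not sure "zonegroup set" will accept the edited JSON on stdin like this, nor whether the "period update --commit" step even works while the period map is in this state):

$ radosgw-admin zonegroup get --zonegroup-id 4d982760-7853-4174-8c05-cec2ef148cf0 > zg.json
$ # edit zg.json and change "name" from "default" to e.g. "default-old" (placeholder name)
$ radosgw-admin zonegroup set < zg.json
$ radosgw-admin period update --commit

Would that be safe to run on a cluster that already holds data, or is there a better way to get rid of the duplicate?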


> On Tue, Sep 6, 2016 at 11:08 AM, Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
>> Hello Orit,
>>
>>> you have two (or more) zonegroups that are set as master.
>>
>> Yes, I know, but I don't know how to fix it.
>>
>>> First, detect which zonegroups are the problematic ones.
>>> Get the zonegroup list by running: radosgw-admin zonegroup list
>>
>> I only see one zonegroup:
>>
>> $ radosgw-admin zonegroup list
>> read_default_id : 0
>> {
>>     "default_info": "default",
>>     "zonegroups": [
>>         "default"
>>     ]
>> }
>>
>>> then on each zonegroup run:
>>> radosgw-admin zonegroup get --rgw-zonegroup <zg name>
>>> and see in which one is_master is true.
>>
>> $ radosgw-admin zonegroup get --rgw-zonegroup default
>> {
>>     "id": "default",
>>     "name": "default",
>>     "api_name": "",
>>     "is_master": "true",
>>     "endpoints": [],
>>     "hostnames": [],
>>     "hostnames_s3website": [],
>>     "master_zone": "",
>>     "zones": [
>>         {
>>             "id": "default",
>>             "name": "default",
>>             "endpoints": [],
>>             "log_meta": "false",
>>             "log_data": "false",
>>             "bucket_index_max_shards": 0,
>>             "read_only": "false"
>>         }
>>     ],
>>     "placement_targets": [
>>         {
>>             "name": "default-placement",
>>             "tags": []
>>         }
>>     ],
>>     "default_placement": "default-placement",
>>     "realm_id": "ccc2e663-66d3-49a6-9e3a-f257785f2d9a"
>> }
>>
>>
>>> Now you need to clear the master flag for all zonegroups except one,
>>> this can be done by running:
>>> radosgw-admin zonegroup modify --rgw-zonegroup <zg name> --master=false
>>
>> If you check the files attached to my previous mail, metadata_zonegroup-map.json and metadata_zonegroup.json, there is only one zonegroup with the name
>> "default", but in metadata_zonegroup.json its id is "default" while in metadata_zonegroup-map.json it is "4d982760-7853-4174-8c05-cec2ef148cf0".
>>
>> So for the zonegroup named "default" I have two different IDs; I guess the problem is there.
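>>
>> Based on your command, I guess the fix would look roughly like this (untested, and I am not sure
>> whether the period commit step applies to my setup, since I never explicitly configured a realm):
>>
>> $ radosgw-admin zonegroup modify --rgw-zonegroup default --master=false
>> $ radosgw-admin period update --commit   # not sure this commit step applies here
>>
>> but since both records share the name "default", I don't see how to reach the one with the id
>> 4d982760-7853-4174-8c05-cec2ef148cf0.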
>>
>> Thanks for your help
>>
>> Best regards
>>
>> Yoann Moulin
>>
>>> On Tue, Sep 6, 2016 at 9:22 AM, Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
>>>> Dear List,
>>>>
>>>> I have an issue with my radosGW.
>>>>
>>>> ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
>>>> Linux cluster002 4.2.0-42-generic #49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>>>> Ubuntu 16.04 LTS
>>>>
>>>>> $ ceph -s
>>>>>     cluster f9dfd27f-c704-4d53-9aa0-4a23d655c7c4
>>>>>      health HEALTH_OK
>>>>>      monmap e1: 3 mons at {cluster002.localdomain=10.90.37.3:6789/0,cluster010.localdomain=10.90.37.11:6789/0,cluster018.localdomain=10.90.37.19:6789/0}
>>>>>             election epoch 40, quorum 0,1,2 cluster002.localdomain,cluster010.localdomain,cluster018.localdomain
>>>>>       fsmap e47: 1/1/1 up {0=cluster006.localdomain=up:active}, 2 up:standby
>>>>>      osdmap e3784: 144 osds: 144 up, 120 in
>>>>>             flags sortbitwise
>>>>>       pgmap v1146863: 7024 pgs, 26 pools, 71470 GB data, 41466 kobjects
>>>>>             209 TB used, 443 TB / 653 TB avail
>>>>>                 7013 active+clean
>>>>>                    7 active+clean+scrubbing+deep
>>>>>                    4 active+clean+scrubbing
>>>>
>>>> Here is an example of the error message I get:
>>>>
>>>>> $ radosgw-admin bucket list
>>>>> 2016-09-06 09:04:14.810198 7fcbb01d5900  0 Error updating periodmap, multiple master zonegroups configured
>>>>> 2016-09-06 09:04:14.810213 7fcbb01d5900  0 master zonegroup: 4d982760-7853-4174-8c05-cec2ef148cf0 and  default
>>>>> 2016-09-06 09:04:14.810215 7fcbb01d5900  0 ERROR: updating period map: (22) Invalid argument
>>>>> 2016-09-06 09:04:14.810230 7fcbb01d5900  0 failed to add zonegroup to current_period: (22) Invalid argument
>>>>> 2016-09-06 09:04:14.810238 7fcbb01d5900 -1 failed converting region to zonegroup : ret -22 (22) Invalid argument
>>>>
>>>> Attached, you will find the output of these commands:
>>>>
>>>>> $ radosgw-admin metadata zonegroup-map get > metadata_zonegroup-map.json
>>>>> $ radosgw-admin metadata zonegroup get > metadata_zonegroup.json
>>>>> $ radosgw-admin metadata zone get > metadata_zone.json
>>>>> $ radosgw-admin metadata region-map get > metadata_region-map.json
>>>>> $ radosgw-admin metadata region get >  metadata_region.json
>>>>> $ radosgw-admin zonegroup-map get > zonegroup-map.json
>>>>> $ radosgw-admin zonegroup get > zonegroup.json
>>>>> $ radosgw-admin zone get > zone.json
>>>>> $ radosgw-admin region-map get > region-map.json
>>>>> $ radosgw-admin region get > region.json
>>>>> $ radosgw-admin period get > period.json
>>>>> $ radosgw-admin period list > period_list.json
>>>>
>>>> I have 60 TB of data in this RadosGW; can I fix this issue without having to re-upload all that data?
>>>>
>>>> Thanks for your help!
>>>>
>>>> Best regards
>>>>
>>>> --
>>>> Yoann Moulin
>>>> EPFL IC-IT
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>
>>
>> --
>> Yoann Moulin
>> EPFL IC-IT


-- 
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



