Re: RadosGW Error : Error updating periodmap, multiple master zonegroups configured

Hi Yoann,
you have two (or more) zonegroups that are set as master.
First, detect which zonegroups are the problematic ones.
Get the zonegroup list by running: radosgw-admin zonegroup list
Then, for each zonegroup, run:
radosgw-admin zonegroup get --rgw-zonegroup <zg name>
and check for which ones is_master is true.
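For example, a zonegroup that holds the master flag will show something like this (trimmed, illustrative output; your zonegroup names and ids will differ):

radosgw-admin zonegroup get --rgw-zonegroup default
{
    ...
    "name": "default",
    "is_master": "true",
    ...
}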

Now you need to clear the master flag on all zonegroups except one.
This can be done by running:
radosgw-admin zonegroup modify --rgw-zonegroup <zg name> --master=false
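In your case the log below shows both "default" and zonegroup 4d982760-7853-4174-8c05-cec2ef148cf0 claiming to be master, so, assuming the non-default zonegroup is the one you want to keep, it would be something like:

radosgw-admin zonegroup modify --rgw-zonegroup default --master=false

Depending on your setup, you may also need to commit the change to the current period afterwards:

radosgw-admin period update --commit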

Orit

On Tue, Sep 6, 2016 at 9:22 AM, Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
> Dear List,
>
> I have an issue with my radosGW.
>
> ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> Linux cluster002 4.2.0-42-generic #49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> Ubuntu 16.04 LTS
>
>> $ ceph -s
>>     cluster f9dfd27f-c704-4d53-9aa0-4a23d655c7c4
>>      health HEALTH_OK
>>      monmap e1: 3 mons at {cluster002.localdomain=10.90.37.3:6789/0,cluster010.localdomain=10.90.37.11:6789/0,cluster018.localdomain=10.90.37.19:6789/0}
>>             election epoch 40, quorum 0,1,2 cluster002.localdomain,cluster010.localdomain,cluster018.localdomain
>>       fsmap e47: 1/1/1 up {0=cluster006.localdomain=up:active}, 2 up:standby
>>      osdmap e3784: 144 osds: 144 up, 120 in
>>             flags sortbitwise
>>       pgmap v1146863: 7024 pgs, 26 pools, 71470 GB data, 41466 kobjects
>>             209 TB used, 443 TB / 653 TB avail
>>                 7013 active+clean
>>                    7 active+clean+scrubbing+deep
>>                    4 active+clean+scrubbing
>
> An example of the error message I get:
>
>> $ radosgw-admin bucket list
>> 2016-09-06 09:04:14.810198 7fcbb01d5900  0 Error updating periodmap, multiple master zonegroups configured
>> 2016-09-06 09:04:14.810213 7fcbb01d5900  0 master zonegroup: 4d982760-7853-4174-8c05-cec2ef148cf0 and  default
>> 2016-09-06 09:04:14.810215 7fcbb01d5900  0 ERROR: updating period map: (22) Invalid argument
>> 2016-09-06 09:04:14.810230 7fcbb01d5900  0 failed to add zonegroup to current_period: (22) Invalid argument
>> 2016-09-06 09:04:14.810238 7fcbb01d5900 -1 failed converting region to zonegroup : ret -22 (22) Invalid argument
>
> Attached are the results of these commands:
>
>> $ radosgw-admin metadata zonegroup-map get > metadata_zonegroup-map.json
>> $ radosgw-admin metadata zonegroup get > metadata_zonegroup.json
>> $ radosgw-admin metadata zone get > metadata_zone.json
>> $ radosgw-admin metadata region-map get > metadata_region-map.json
>> $ radosgw-admin metadata region get >  metadata_region.json
>> $ radosgw-admin zonegroup-map get > zonegroup-map.json
>> $ radosgw-admin zonegroup get > zonegroup.json
>> $ radosgw-admin zone get > zone.json
>> $ radosgw-admin region-map get > region-map.json
>> $ radosgw-admin region get > region.json
>> $ radosgw-admin period get > period.json
>> $ radosgw-admin period list > period_list.json
>
> I have 60 TB of data in this RadosGW; can I fix this issue without having to re-upload all that data?
>
> Thanks for your help!
>
> Best regards
>
> --
> Yoann Moulin
> EPFL IC-IT
>


