Yes! That worked :-)
Now I changed the master_zone to default like so:
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [
        "***REDACTED***",
        "***REDACTED***",
        "***REDACTED***"
    ],
    "hostnames_s3website": [],
    "master_zone": "default",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": ""
}
and
radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default
gives me
failed to init realm: (2) No such file or directory
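(For reference, a minimal sketch of what could be checked at this point, assuming a Jewel-era radosgw-admin; the ENOENT here usually means radosgw-admin could not find a realm to load:)

# list the realms, zonegroups and zones radosgw-admin can actually see
radosgw-admin --cluster=pbs realm list
radosgw-admin --cluster=pbs zonegroup list
radosgw-admin --cluster=pbs zone list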
--
anamica GmbH
Heppacher Str. 39
71404 Korb
Telefon: +49 7151 1351565 0
Telefax: +49 7151 1351565 9
E-Mail: frank.enderle@xxxxxxxxxx
Internet: www.anamica.de
Handelsregister: AG Stuttgart HRB 732357
Geschäftsführer: Yvonne Holzwarth, Frank Enderle
Does adding --rgw-zonegroup=default help?
On Tue, Jul 26, 2016 at 11:09 AM, Frank Enderle
<frank.enderle@xxxxxxxxxx> wrote:
> I get this error when I try to execute the command:
>
> radosgw-admin --cluster=pbs zonegroup get
> failed to init zonegroup: (2) No such file or directory
>
> also with
>
> radosgw-admin --cluster=pbs zonegroup get --rgw-zone=default
> failed to init zonegroup: (2) No such file or directory
>
>
> --
>
> anamica GmbH
> Heppacher Str. 39
> 71404 Korb
>
> Telefon: +49 7151 1351565 0
> Telefax: +49 7151 1351565 9
> E-Mail: frank.enderle@xxxxxxxxxx
> Internet: www.anamica.de
>
>
> Handelsregister: AG Stuttgart HRB 732357
> Geschäftsführer: Yvonne Holzwarth, Frank Enderle
>
>
> From: Orit Wasserman <owasserm@xxxxxxxxxx>
> Date: 26 July 2016 at 09:55:58
> To: Frank Enderle <frank.enderle@xxxxxxxxxx>
> Cc: Shilpa Manjarabad Jagannath <smanjara@xxxxxxxxxx>,
> ceph-users@xxxxxxxxxxxxxx <ceph-users@xxxxxxxxxxxxxx>
>
> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>
> You need to set the default zone as the master zone.
> You can try:
> radosgw-admin zonegroup set < zg.json
> where zg.json is the JSON returned by radosgw-admin zonegroup get,
> with the master_zone field set to "default".
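> (spelled out as a minimal sketch -- zg.json is just a placeholder filename:)
>
> # dump the current zonegroup configuration to a file
> radosgw-admin zonegroup get > zg.json
> # edit zg.json so that it contains "master_zone": "default",
> # then load it back (zonegroup set reads the JSON from stdin)
> radosgw-admin zonegroup set < zg.json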
>
> Orit
>
> On Mon, Jul 25, 2016 at 11:17 PM, Frank Enderle
> <frank.enderle@xxxxxxxxxx> wrote:
>> It most certainly looks very much like the same problem... Is there a way
>> to patch the configuration by hand to get the cluster back into a working
>> state?
>>
>> --
>>
>> From: Shilpa Manjarabad Jagannath <smanjara@xxxxxxxxxx>
>> Date: 25 July 2016 at 10:34:42
>> To: Frank Enderle <frank.enderle@xxxxxxxxxx>
>> Cc: ceph-users@xxxxxxxxxxxxxx <ceph-users@xxxxxxxxxxxxxx>
>> Subject: Re: [ceph-users] Problem with RGW after update to Jewel
>>
>>
>> ----- Original Message -----
>>> From: "Frank Enderle" <frank.enderle@xxxxxxxxxx>
>>> To: ceph-users@xxxxxxxxxxxxxx
>>> Sent: Monday, July 25, 2016 1:28:10 AM
>>> Subject: [ceph-users] Problem with RGW after update to Jewel
>>>
>>> Hi,
>>>
>>> A while ago I updated a cluster from Infernalis to Jewel. After the
>>> update some problems occurred, which I fixed (I had to create some
>>> additional pools, which I got help with in the IRC channel) - so the
>>> cluster ran fine again until we tried to add an additional bucket. Now I
>>> get the following error in the error log:
>>>
>>> 2016-07-24 19:50:45.978005 7f6ce97fa700 1 ====== starting new request req=0x7f6ce97f4710 =====
>>> 2016-07-24 19:50:46.021122 7f6ce97fa700 0 sending create_bucket request to master zonegroup
>>> 2016-07-24 19:50:46.021135 7f6ce97fa700 0 ERROR: endpoints not configured for upstream zone
>>> 2016-07-24 19:50:46.021148 7f6ce97fa700 0 WARNING: set_req_state_err err_no=5 resorting to 500
>>> 2016-07-24 19:50:46.021249 7f6ce97fa700 1 ====== req done req=0x7f6ce97f4710 op status=-5 http_status=500 ======
>>> 2016-07-24 19:50:46.021304 7f6ce97fa700 1 civetweb: 0x7f6dac001420: 10.42.20.5 - - [24/Jul/2016:19:50:45 +0000] "PUT /abc/ HTTP/1.1" 500 0 - Cyberduck/4.7.3.18402 (Mac OS X/10.11.6) (x86_64)
>>>
>>> I already tried to fix the problem using the script at
>>>
>>> https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg28620.html
>>>
>>> with the outcome that all users disappeared and no bucket could be
>>> accessed. So I restored the backup of .rgw.root and it now works again,
>>> but I still can't create buckets. Obviously something has been mixed up
>>> with the zone/zonegroup stuff during the update.
>>>
>>> Would somebody be able to take a look at this? I'm happy to provide all
>>> the required files; just name them.
>>>
>>> Thanks,
>>>
>>> Frank
>>>
>>
>> It looks like http://tracker.ceph.com/issues/16627, pending backport.
>>
>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>