Re: unable to start radosgw after upgrade from 10.2.2 to 10.2.3

Hi Orit,

That could well be related - as mentioned, we do have a hammer radosgw still running, and I have also run radosgw-admin on that system while trying to understand what changed between the two releases!

Reading that bug report, it sounds like having the hammer radosgw itself running isn't necessarily a problem, so leaving it up while correcting the master zone for jewel should be safe as long as the hammer radosgw-admin doesn't get run.

Of course it shouldn't be a big deal to shut the hammer radosgw down before changing anything, but in principle it would be nice if it were safe to do a true rolling upgrade across multiple radosgw instances with no downtime for the users...
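For the record, the change I have in mind for setting the master zone is roughly the following - just a sketch based on my reading of the jewel multisite docs, using the "default" zone/zonegroup names from the output quoted below, and not something I've actually run on this cluster yet:

# dump the current zonegroup config (master_zone is currently empty)
radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json

# edit zonegroup.json so that "master_zone" reads "default", then load it back
radosgw-admin zonegroup set < zonegroup.json

# make it the default zonegroup for this cluster
radosgw-admin zonegroup default --rgw-zonegroup=default

If that looks wrong, or if any of it needs the hammer gateway shut down first, please say so.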

Thanks,

Graham

On 10/10/2016 09:29 AM, Orit Wasserman wrote:
Hi Graham,
Is there a chance you have old radosgw-admin (hammer) running?
You may have encountered http://tracker.ceph.com/issues/17371
If a hammer radosgw-admin runs against the jewel radosgw it corrupts the
configuration.
We are working on a fix for that.

Orit


On Fri, Oct 7, 2016 at 9:37 PM, Graham Allan <gta@xxxxxxx> wrote:
Dear Orit,

On 10/07/2016 04:21 AM, Orit Wasserman wrote:

Hi,

On Wed, Oct 5, 2016 at 11:23 PM, Andrei Mikhailovsky <andrei@xxxxxxxxxx>
wrote:

Hello everyone,

I've just updated my Ceph to version 10.2.3 from 10.2.2 and I am no longer
able to start the radosgw service. When executing it I get the following
error:

2016-10-05 22:14:10.735883 7f1852d26a00  0 ceph version 10.2.3
(ecc23778eb545d8dd55e2e4735b53cc93f92e65b), process radosgw, pid 2711
2016-10-05 22:14:10.765648 7f1852d26a00  0 pidfile_write: ignore empty
--pid-file
2016-10-05 22:14:11.287772 7f1852d26a00  0 zonegroup default missing zone
for master_zone=


This means you are missing a master zone; you can get here only if
you have configured a realm.
Is that the case?

Can you provide:
radosgw-admin realm get
radosgw-admin zonegroupmap get
radosgw-admin zonegroup get
radosgw-admin zone get --rgw-zone=default

Orit


I have not yet modified anything since the jewel upgrade - do you mind if I
post the output for these from our cluster for your opinion? There is
apparently no realm configured (which is what I expect for this cluster),
but it sounds like you think this situation shouldn't arise in that case.

root@cephgw04:~# radosgw-admin realm get
missing realm name or id, or default realm not found
root@cephgw04:~# radosgw-admin realm list
{
    "default_info": "",
    "realms": []
}

root@cephgw04:~# radosgw-admin zonegroupmap get
failed to read current period info: (2) No such file or directory{
    "zonegroups": [],
    "master_zonegroup": "",
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}
2016-10-07 14:33:15.299994 7fecf5cf4900  0 RGWPeriod::init failed to init
realm  id  : (2) No such file or directory
root@cephgw04:~# radosgw-admin zonegroup get
failed to init zonegroup: (2) No such file or directory
root@cephgw04:~# radosgw-admin zonegroup get --rgw-zonegroup=default
{
    "id": "default",
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [
        {
            "id": "default",
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 32,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        },
        {
            "name": "ec42-placement",
            "tags": []
        }
    ],
    "default_placement": "ec42-placement",
    "realm_id": ""
}

root@cephgw04:~# radosgw-admin zone get --rgw-zone=default
{
    "id": "default",
    "name": "default",
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [],
    "metadata_heap": ".rgw.meta",
    "realm_id": ""
}


--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


