Re: Can't start radosgw

Okay, John Wilkins suggested turning off authentication to see if that works...  and it seems to have!

We commented out the three "required" auth lines in [global], and radosgw started right up.

However, we don't really want to run without auth, so does anyone have any ideas what the issue with auth could be?
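
For the record, here's what we're going back over on the auth side. This is just a sketch following the standard radosgw setup docs -- the keyring path is the one from our config further down the thread, and the caps are the documented ones, so treat it as an assumption rather than verified output from our boxes:

# confirm the cluster actually knows the gateway key and its caps
ceph auth list

# if the key is missing, the docs create and register it roughly like this
ceph-authtool --create-keyring /etc/ceph/keyring.radosgw.gateway
ceph-authtool /etc/ceph/keyring.radosgw.gateway -n client.radosgw.gateway --gen-key
ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rw' /etc/ceph/keyring.radosgw.gateway
ceph auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway

# sanity check: can the gateway identity reach the cluster at all?
ceph -n client.radosgw.gateway -k /etc/ceph/keyring.radosgw.gateway -s

If that last command hangs or errors while a plain "ceph -s" works, the problem is presumably in the key or caps rather than in radosgw itself.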

Thanks!

Rom



From: Romeo M <romeo_ceph@xxxxxxxxxxxxxx>
To: Andreas Kurz <andreas@xxxxxxxxxxx>; "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, March 27, 2013 10:10 PM
Subject: Re: Can't start radosgw

Hi Andreas,

We're running it on a storage node (it's running 4 OSD processes, but no mon processes).  Here's the config:

Thanks!

Rom


[global]
        auth supported = cephx
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx

[osd]

[mon]

[mds]

[client.radosgw.gateway]
        host = storage-node4
        rgw socket path = /tmp/radosgw.sock
        keyring = /etc/ceph/keyring.radosgw.gateway
        log file = /var/log/ceph/radosgw.log

[mon.a]
        host = storage-node1
        mon addr = 10.10.10.100
[osd.0]
        host = storage-node1
        addr = 10.10.10.100
[osd.1]
        host = storage-node1
        addr = 10.10.10.100
[osd.2]
        host = storage-node1
        addr = 10.10.10.100
[osd.3]
        host = storage-node1
        addr = 10.10.10.100

[mon.b]
        host = storage-node2
        mon addr = 10.10.10.101
[osd.4]
        host = storage-node2
        addr = 10.10.10.101
[osd.5]
        host = storage-node2
        addr = 10.10.10.101
[osd.6]
        host = storage-node2
        addr = 10.10.10.101
[osd.7]
        host = storage-node2
        addr = 10.10.10.101

[mon.c]
        host = storage-node3
        mon addr = 10.10.10.102
[osd.8]
        host = storage-node3
        addr = 10.10.10.102
[osd.9]
        host = storage-node3
        addr = 10.10.10.102
[osd.10]
        host = storage-node3
        addr = 10.10.10.102
[osd.11]
        host = storage-node3
        addr = 10.10.10.102

[osd.12]
        host = storage-node4
        addr = 10.10.10.103
[osd.13]
        host = storage-node4
        addr = 10.10.10.103
[osd.14]
        host = storage-node4
        addr = 10.10.10.103
[osd.15]
        host = storage-node4
        addr = 10.10.10.103
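
(For completeness: the keyring referenced under [client.radosgw.gateway] should, per the docs we followed, look roughly like the sketch below -- key elided, and the caps shown are the documented ones rather than a verified copy of our file.)

[client.radosgw.gateway]
        key = <key elided>
        caps mon = "allow rw"
        caps osd = "allow rwx"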





From: Andreas Kurz <andreas@xxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, March 27, 2013 2:18 AM
Subject: Re: Can't start radosgw

On 2013-03-26 17:07, Romeo M wrote:
> Hi all,
>
> Anyone have any ideas on this?  It's driving us nuts!

Sharing your current configuration might help. Is the radosgw running on a
separate host, or together with some other Ceph daemon?

Regards,
Andreas

>
> Thanks!
>
> Rom
>
>
> ------------------------------------------------------------------------
> *From:* Romeo M <romeo_ceph@xxxxxxxxxxxxxx>
> *To:* "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
> *Sent:* Saturday, March 23, 2013 6:58 PM
> *Subject:* Can't start radosgw
>
> Hi all,
>
> I'm having some issues starting radosgw and I was wondering if anyone
> here might have some tips for us. We set up a Ceph cluster with no issues
> (ceph -s returns HEALTH_OK), but when we try to start radosgw all we get
> in the logs is "Initialization timeout, failed to initialize". We turned
> debugging up (debug ms = 1) and tried to run in the foreground (radosgw
> -d), but no useful logs are produced... We've quadruple-checked all the
> config, permissions, keyfiles, etc., but have no idea what's wrong. In
> the log file (without debug) all we see is this:
>
> 2013-03-23 18:29:18.064508 7fa93ed6f780  0 ceph version 0.56.3
> (6eb7e15a4783b122e9b0c85ea9ba064145958aa5), process radosgw, pid 25755
> 2013-03-23 18:29:48.067629 7fa938a3b700 -1 Initialization timeout,
> failed to initialize
>
> If we turn on debug, we see this:
>
> 2013-03-23 18:30:05.666988 7fda3dd17780  0 ceph version 0.56.3
> (6eb7e15a4783b122e9b0c85ea9ba064145958aa5), process radosgw, pid 25777
> 2013-03-23 18:30:05.692383 7fda3dd17780  1 -- :/0 messenger.start
> 2013-03-23 18:30:05.695468 7fda3dd17780  1 -- :/1025779 -->
> 10.10.10.100:6789/0 -- auth(proto 0 40 bytes epoch 0) v1 -- ?+0
> 0x14b0930 con 0x14b0520
> 2013-03-23 18:30:05.695895 7fda33a81700  1 -- 10.10.10.100:0/1025779
> learned my addr 10.10.10.100:0/1025779
> 2013-03-23 18:30:05.697127 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 1 ==== mon_map v1 ==== 473+0+0 (2926159256 0
> 0) 0x14b5090 con 0x14b0520
> 2013-03-23 18:30:05.697475 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 2 ==== auth_reply(proto 2 0 Success) v1 ====
> 33+0+0 (4028035406 0 0) 0x14b5340 con 0x14b0520
> 2013-03-23 18:30:05.697987 7fda35a85700  1 -- 10.10.10.100:0/1025779 -->
> 10.10.10.100:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0
> 0x14b5980 con 0x14b0520
> 2013-03-23 18:30:05.699032 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 3 ==== auth_reply(proto 2 0 Success) v1 ====
> 222+0+0 (404397455 0 0) 0x14b5340 con 0x14b0520
> 2013-03-23 18:30:05.699478 7fda35a85700  1 -- 10.10.10.100:0/1025779 -->
> 10.10.10.100:6789/0 -- auth(proto 2 181 bytes epoch 0) v1 -- ?+0
> 0x14b1b70 con 0x14b0520
> 2013-03-23 18:30:05.700788 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 4 ==== auth_reply(proto 2 0 Success) v1 ====
> 425+0+0 (1505282999 0 0) 0x14b1fe0 con 0x14b0520
> 2013-03-23 18:30:05.701088 7fda35a85700  1 -- 10.10.10.100:0/1025779 -->
> 10.10.10.100:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x14b0d60
> con 0x14b0520
> 2013-03-23 18:30:05.701627 7fda3dd17780  1 -- 10.10.10.100:0/1025779 -->
> 10.10.10.100:6789/0 -- mon_subscribe({monmap=3+,osdmap=0}) v2 -- ?+0
> 0x14b7a40 con 0x14b0520
> 2013-03-23 18:30:05.701791 7fda3dd17780  1 -- 10.10.10.100:0/1025779 -->
> 10.10.10.100:6789/0 -- mon_subscribe({monmap=3+,osdmap=0}) v2 -- ?+0
> 0x14b7fb0 con 0x14b0520
> 2013-03-23 18:30:05.702395 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 5 ==== mon_map v1 ==== 473+0+0 (2926159256 0
> 0) 0x14b7fb0 con 0x14b0520
> 2013-03-23 18:30:05.702648 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0
> (3392334090 0 0) 0x14b83b0 con 0x14b0520
> 2013-03-23 18:30:05.702757 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 7 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0
> (3392334090 0 0) 0x7fda2c000ce0 con 0x14b0520
> 2013-03-23 18:30:05.702814 7fda35a85700  1 -- 10.10.10.100:0/1025779 <==
> mon.0 10.10.10.100:6789/0 8 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0
> (3392334090 0 0) 0x7fda2c000f10 con 0x14b0520
> 2013-03-23 18:30:35.669341 7fda379e3700 -1 Initialization timeout,
> failed to initialize
>
> ceph -s returns the following:
>
> # ceph -s
>    health HEALTH_OK
>    monmap e2: 3 mons at
> {a=10.10.10.100:6789/0,b=10.10.10.101:6789/0,c=10.10.10.102:6789/0},
> election epoch 22, quorum 0,1,2 a,b,c
>    osdmap e268: 16 osds: 16 up, 16 in
>    pgmap v8749: 3304 pgs: 3304 active+clean; 540 bytes data, 82134 MB
> used, 29713 GB / 29793 GB avail
>    mdsmap e1: 0/0/1 up
>
> Any ideas??
>
> Thanks!
>
> Rom





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
