Re: keyring generation

On Sun, Feb 2, 2014 at 12:18 AM, Kei.masumoto <kei.masumoto@xxxxxxxxx> wrote:
> Hi,
>
> I am a newbie to Ceph, and I am trying to deploy it by following
> "http://ceph.com/docs/master/start/quick-ceph-deploy/".
> ceph1, ceph2 and ceph3 exist as described in the tutorial. I got
> WARNING messages when I executed "ceph-deploy mon create-initial":
>
> 2014-02-01 14:06:37,385 [ceph_deploy.gatherkeys][WARNING] Unable to find
> /etc/ceph/ceph.client.admin.keyring on ['ceph1']
> 2014-02-01 14:06:37,516 [ceph_deploy.gatherkeys][WARNING] Unable to find
> /var/lib/ceph/bootstrap-osd/ceph.keyring on ['ceph1']
> 2014-02-01 14:06:37,639 [ceph_deploy.gatherkeys][WARNING] Unable to find
> /var/lib/ceph/bootstrap-mds/ceph.keyring on ['ceph1']
>
> Thinking about when those three keyrings should be created, I think
> "ceph-deploy mon create" is the right time for keyring creation. I
> checked my environment and found
> /etc/ceph/ceph.client.admin.keyring.14081.tmp. It looks like this file
> is created by ceph-create-keys when executing "stop ceph-all && start
> ceph-all", but ceph-create-keys never finishes.
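
If ceph-create-keys is hanging around waiting on the monitor, you
should be able to see it in the process list on the monitor host, e.g.:

    ps aux | grep [c]eph-create-keys

(the [c] is just a common trick to keep grep from matching itself).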

ceph-deploy does a lot to help here with create-initial, and although
the warnings are useful, they only mean something in the context of the
rest of the output.

When the whole process completes, does ceph-deploy say all mons are up
and running?

It would be better to paste the complete output of the call so we can
see the details.
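
For example (assuming your admin/working directory is ~/my-cluster, as
in your prompt below), you could re-run the step and capture everything:

    cd ~/my-cluster
    ceph-deploy mon create-initial 2>&1 | tee create-initial.log

Once the monitor reports it is in quorum you can also retry just the
key collection with "ceph-deploy gatherkeys ceph1".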
>
> When I execute ceph-create-keys manually, it keeps generating the log
> below; it looks like it is waiting for a reply...
>
> 2014-02-01 20:13:02.847737 7f55e81a4700  0 -- :/1001774 >>
> 192.168.11.8:6789/0 pipe(0x7f55e4024400 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7f55e4024660).fault
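
That fault line generally just means the client could not establish a
connection to the monitor address it was given, so it is worth
double-checking which monitor address ceph-create-keys is picking up
from the config (a quick check, assuming the default path
/etc/ceph/ceph.conf):

    grep -E 'mon_initial_members|mon_host|public_network' /etc/ceph/ceph.conf

and comparing that with the address the monitor is actually bound to.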
>
> Since I found that the mon listens on 6789, I straced the mon, and the
> mon also seems to be waiting for something...
>
> root@ceph1:~/src/ceph-0.56.7# strace -p 1047
> Process 1047 attached - interrupt to quit
> futex(0x7f37c14839d0, FUTEX_WAIT, 1102, NULL
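
A futex wait like that is normal for an idle thread, so strace on the
main monitor thread will not tell you much. A more direct check
(assuming iproute2 or net-tools is installed) is whether the monitor is
actually listening on 6789:

    ss -tlnp | grep 6789
    # or: netstat -tlnp | grep 6789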
>
> I have no idea what the situation should be; any hints?
>
> P.S. Somebody advised me to check the output below, but I don't see
> anything wrong in it.
> root@ceph1:~/my-cluster# ceph daemon mon.`hostname` mon_status
> { "name": "ceph1",
>   "rank": 0,
>   "state": "leader",
>   "election_epoch": 1,
>   "quorum": [
>         0],
>   "outside_quorum": [],
>   "extra_probe_peers": [],
>   "sync_provider": [],
>   "monmap": { "epoch": 1,
>       "fsid": "26835656-6b29-455d-9d1f-545cad8f1e23",
>       "modified": "0.000000",
>       "created": "0.000000",
>       "mons": [
>             { "rank": 0,
>               "name": "ceph1",
>               "addr": "192.168.111.11:6789\/0"}]}}
>
>
> Kei
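
The mon_status output looks healthy for a single-monitor setup (rank 0,
state "leader", in quorum), so the next thing I would confirm is whether
the keyrings ever appear on the monitor host after create-initial runs,
e.g.:

    ls -l /etc/ceph/ceph.client.admin.keyring \
          /var/lib/ceph/bootstrap-osd/ceph.keyring \
          /var/lib/ceph/bootstrap-mds/ceph.keyring

If those never show up, ceph-create-keys is most likely still stuck
waiting on the monitor, and the full ceph-deploy output should make
that visible.
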
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



