Re: keyring generation

(2014/02/16 3:06), Kei.masumoto wrote:
(2014/02/11 23:02), Alfredo Deza wrote:
On Tue, Feb 11, 2014 at 7:57 AM, Kei.masumoto <kei.masumoto@xxxxxxxxx> wrote:
(2014/02/10 23:33), Alfredo Deza wrote:
On Sat, Feb 8, 2014 at 7:56 AM, Kei.masumoto <kei.masumoto@xxxxxxxxx>
wrote:
(2014/02/05 23:49), Alfredo Deza wrote:
On Mon, Feb 3, 2014 at 11:28 AM, Kei.masumoto <kei.masumoto@xxxxxxxxx>
wrote:
Hi Alfredo,

Thanks for your reply!

I think I pasted all logs from ceph.log, but anyway, I re-executed
"ceph-deploy mon create-initial" again.
Does that make sense? It seems like stack traces were added...
Those seem bad enough. There is a ticket open for this type of
traceback; they should be gone with the upcoming release of ceph-deploy.

Your monitor does seem to be in a good state. Have you checked the
monitor logs to see if they are complaining about something?

I would also raise the log level in ceph.conf for the monitors
specifically to:

     debug mon = 10
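
For example, as a minimal ceph.conf fragment (the mon needs a restart to
pick it up):

[mon]
debug mon = 10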

Thanks for your reply.
I set "debug mon = 10", but I could not find any error logs in
/var/log/ceph/ceph-mon.ceph1.log.
So I tried to let ceph-create-keys write its logs to files, and inserted
log statements myself for debugging purposes.
Then I found that get_key() in ceph-create-keys complains as below (the
first line was inserted by me):

INFO:ceph-create-keys: ceph --cluster=ceph --name=mon.
--keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create
client.admin mon allow * osd allow * mds allow
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Cannot get or create admin key, permission denied
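
For reference, that is the same command run by hand (with the
capabilities quoted for the shell):

root@ceph1:~# ceph --cluster=ceph --name=mon. --keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create client.admin mon 'allow *' osd 'allow *' mds allow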
How did you start ceph-create-keys? As root, or as the ceph user?

That process is usually fired up by the init script, which is called
by root.
I just ran "ceph-deploy --overwrite-conf mon create-initial" as user ceph.
Before doing that, I ran "start ceph-all && stop ceph-all" as root.
That is odd; if this is a new cluster, why are you starting and stopping?

It seems that you are at a point where you have tried a few things and
the cluster setup might not be in a good state.

Can you try setting it up from scratch and make sure you keep logs and
output? If you can replicate your issues consistently (I have tried and
cannot) then it might indicate an issue, and all the logs and how you
got there would be super useful.

I tried from scratch and the logs are attached.
Currently, 4 hosts exist in my test environment: ceph5 (remote host), ceph4 (mon), ceph3 (osd), ceph2 (osd). I tried to follow the instructions at <http://ceph.com/docs/master/start/quick-start-preflight/#ceph-node-setup>, although the hostnames are a little different. Please see the end of console@xxxxxxxxx. After executing "ceph-deploy mon create-initial", I got the same error.
Although I will check in a little more detail, I would appreciate any hints.

I understand my problem now. According to the instructions below,
<http://ceph.com/docs/master/start/quick-start-preflight/#ceph-node-setup>
when I write "public network" in ceph.conf, "mon_host" has to be inside
the subnet described by "public network".
I didn't realize such a precondition existed; I have to learn more.
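
For example, a consistent fragment would look like this (the addresses
are illustrative, matching my mon's 192.168.40.136):

[global]
mon_host = 192.168.40.136
public network = 192.168.40.0/24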

After I changed /usr/bin/ceph like below,

conf_defaults = {
-    'log_to_stderr':'true',
-    'err_to_stderr':'true',
+    'log_to_syslog':'true',
     'log_flush_on_exit':'true',
}

I found the following log in /var/log/syslog:

2014-02-15 23:31:50.417381 7f22f8626700 0 -- :/1009957 >> 192.168.40.136:6789/0 pipe(0x7f22e8019850 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f22e8000c00).fault

The IP address being connected to differs from what netstat shows the
mon listening on.
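
A quick way to check what the mon is actually bound to (run on the mon
host; assumes net-tools is installed):

root@ceph4:~# netstat -lntp | grep 6789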

Thanks for your help so far.

BTW, I inserted debugging logs into /usr/bin/ceph, and found the logs below.

INFO:debug:Exception error calling connect
INFO:debug:Exception error calling connect TimedOut

Those logs are generated by cluster_handle.connect(), i.e.
rados.Rados.connect():

    try:
        if childargs and childargs[0] == 'ping':
            return ping_monitor(cluster_handle, childargs[1])
        # the TimedOut above is raised from this call
        cluster_handle.connect(timeout=timeout)
Any hints on where to check? Port 6789 is listened on by the mon.
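
For reference, the debug logging I inserted around connect() looks
roughly like this (a sketch; the 'debug' logger name and setup are mine,
and cluster_handle and timeout come from the surrounding /usr/bin/ceph
code):

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger('debug')

try:
    cluster_handle.connect(timeout=timeout)
except Exception as e:
    # prints e.g. "INFO:debug:Exception error calling connect TimedOut"
    log.info('Exception error calling connect %s', e.__class__.__name__)
    raise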



So I tried:

root@ceph1:~/my-cluster# chown -R ceph:ceph /var/lib/ceph/mon/
root@ceph1:~/my-cluster# start ceph-all && stop ceph-all
ceph-all start/running
ceph-all stop/waiting

Then I re-tried:

ceph@ceph1:~/my-cluster$ ceph-deploy --overwrite-conf mon create-initial

After that, I found some files are still owned by root. Is this correct
behavior?
root@ceph1:~/my-cluster# ls -l /var/lib/ceph/mon/ceph-ceph1/store.db
total 1184
-rw-r--r-- 1 ceph ceph 1081168 Feb  8 02:25 000133.sst
-rw-r--r-- 1 ceph ceph   25530 Feb  8 02:38 000135.sst
-rw-r--r-- 1 ceph ceph   25530 Feb  8 02:38 000138.sst
-rw-r--r-- 1 root root   25530 Feb  8 02:44 000141.sst
-rw-r--r-- 1 root root   65536 Feb  8 02:44 000142.log
-rw-r--r-- 1 root root      16 Feb  8 02:44 CURRENT
-rw-r--r-- 1 ceph ceph       0 Jan 26 05:50 LOCK
-rw-r--r-- 1 ceph ceph     315 Jan 26 06:28 LOG
-rw-r--r-- 1 ceph ceph      57 Jan 26 05:50 LOG.old
-rw-r--r-- 1 root root   65536 Feb  8 02:44 MANIFEST-000140
Also, it looks like ceph-create-keys and ceph-mon are launched by root.
Is this correct?
Yes, these processes are launched by root.

root@ceph1:~/my-cluster# ps -ef | grep ceph
root     18943     1  0 03:11 ?        00:00:00 /bin/sh -e -c
/usr/bin/ceph-mon --cluster="${cluster:-ceph}" -i "$id" -f /bin/sh
root     18944 18943  0 03:11 ?        00:00:00 /usr/bin/ceph-mon
--cluster=ceph -i ceph1 -f
root     18945     1  0 03:11 ?        00:00:00 /bin/sh -e -c
/usr/sbin/ceph-create-keys --cluster="${cluster:-ceph}" -i
"${id:-$(hostname)}" /bin/sh
root     18946 18945  0 03:11 ?        00:00:00 /usr/bin/python
/usr/sbin/ceph-create-keys --cluster=ceph -i ceph1
root     18960 18946  1 03:11 ?        00:00:00 /usr/bin/python
/usr/bin/ceph --cluster=ceph --name=mon.
--keyring=/var/lib/ceph/mon/ceph-ceph1/keyring auth get-or-create
client.admin mon allow * osd allow * mds allow
root     19036 16175  0 03:11 pts/7    00:00:00 tail -f
/var/log/ceph/ceph-mon.ceph1.log




--------------------------------------------------------------------------------------------------------------------------

[ceph_deploy.cli][INFO  ] Invoked (1.3.4): /usr/bin/ceph-deploy mon
create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph1 ...
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 13.04 raring
[ceph1][DEBUG ] determining if provided host has same hostname in
remote
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] deploying mon to ceph1
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] remote hostname: ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][DEBUG ] create the mon path if it does not exist
[ceph1][DEBUG ] checking for done path:
/var/lib/ceph/mon/ceph-ceph1/done
[ceph1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph1][DEBUG ] create the init path if it does not exist
[ceph1][DEBUG ] locating the `service` executable...
[ceph1][INFO  ] Running command: sudo initctl emit ceph-mon
cluster=ceph
id=ceph1
[ceph1][INFO  ] Running command: sudo ceph --cluster=ceph
--admin-daemon
/var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][DEBUG ]


********************************************************************************

[ceph1][DEBUG ] status for monitor: mon.ceph1
[ceph1][DEBUG ] {
[ceph1][DEBUG ]   "election_epoch": 1,
[ceph1][DEBUG ]   "extra_probe_peers": [],
[ceph1][DEBUG ]   "monmap": {
[ceph1][DEBUG ]     "created": "0.000000",
[ceph1][DEBUG ]     "epoch": 1,
[ceph1][DEBUG ]     "fsid": "26835656-6b29-455d-9d1f-545cad8f1e23",
[ceph1][DEBUG ]     "modified": "0.000000",
[ceph1][DEBUG ]     "mons": [
[ceph1][DEBUG ]       {
[ceph1][DEBUG ]         "addr": "192.168.111.11:6789/0",
[ceph1][DEBUG ]         "name": "ceph1",
[ceph1][DEBUG ]         "rank": 0
[ceph1][DEBUG ]       }
[ceph1][DEBUG ]     ]
[ceph1][DEBUG ]   },
[ceph1][DEBUG ]   "name": "ceph1",
[ceph1][DEBUG ]   "outside_quorum": [],
[ceph1][DEBUG ]   "quorum": [
[ceph1][DEBUG ]     0
[ceph1][DEBUG ]   ],
[ceph1][DEBUG ]   "rank": 0,
[ceph1][DEBUG ]   "state": "leader",
[ceph1][DEBUG ]   "sync_provider": []
[ceph1][DEBUG ] }
[ceph1][DEBUG ]


********************************************************************************

[ceph1][INFO  ] monitor: mon.ceph1 is running
[ceph1][INFO  ] Running command: sudo ceph --cluster=ceph
--admin-daemon
/var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][INFO  ] Running command: sudo ceph --cluster=ceph
--admin-daemon
/var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have
formed
quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
gatherkeys.fetch_file Namespace(cluster='ceph', dry_run=False, func=<function mon at 0xe14e60>, mon=['ceph1'], overwrite_conf=False, prog='ceph-deploy', quiet=False, subcommand='create-initial', username=None, verbose=False) :: /etc/ceph/ceph.client.admin.keyring :: ceph.client.admin.keyring :: ['ceph1']

[ceph_deploy.gatherkeys][DEBUG ] Checking ceph1 for /etc/ceph/ceph.client.admin.keyring
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find /etc/ceph/ceph.client.admin.keyring on ['ceph1']
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 6, in <module>
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 1220, in serve
gatherkeys.fetch_file Namespace(cluster='ceph', dry_run=False, func=<function mon at 0xe14e60>, mon=['ceph1'], overwrite_conf=False, prog='ceph-deploy', quiet=False, subcommand='create-initial', username=None, verbose=False) :: /var/lib/ceph/mon/ceph-{hostname}/keyring :: ceph.mon.keyring :: ['ceph1']
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
gatherkeys.fetch_file Namespace(cluster='ceph', dry_run=False, func=<function mon at 0xe14e60>, mon=['ceph1'], overwrite_conf=False, prog='ceph-deploy', quiet=False, subcommand='create-initial', username=None, verbose=False) :: /var/lib/ceph/bootstrap-osd/ceph.keyring :: ceph.bootstrap-osd.keyring :: ['ceph1']
    SlaveGateway(io=io, id=id, _startcount=2).serve()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 764, in serve
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
    self._io.close_write()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 105, in close_write
    self.outfile.close()
IOError: close() called during concurrent operation on the same file object.

[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring on ['ceph1']
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 6, in <module>
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 1220, in serve
    SlaveGateway(io=io, id=id, _startcount=2).serve()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 764, in serve
    self._io.close_write()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 105, in close_write
gatherkeys.fetch_file Namespace(cluster='ceph', dry_run=False, func=<function mon at 0xe14e60>, mon=['ceph1'], overwrite_conf=False, prog='ceph-deploy', quiet=False, subcommand='create-initial', username=None, verbose=False) :: /var/lib/ceph/bootstrap-mds/ceph.keyring :: ceph.bootstrap-mds.keyring :: ['ceph1']
    self.outfile.close()
IOError: close() called during concurrent operation on the same file object.

[ceph_deploy.gatherkeys][DEBUG ] Checking ceph1 for
/var/lib/ceph/bootstrap-mds/ceph.keyring
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find
/var/lib/ceph/bootstrap-mds/ceph.keyring on ['ceph1']
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 6, in <module>
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 1220, in serve
    SlaveGateway(io=io, id=id, _startcount=2).serve()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 764, in serve
    self._io.close_write()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 105, in close_write
    self.outfile.close()
IOError: close() called during concurrent operation on the same file object.



----------------------------------------------------------------------------------------------------------









(2014/02/04 1:12), Alfredo Deza wrote:
On Mon, Feb 3, 2014 at 10:07 AM, Kei.masumoto <kei.masumoto@xxxxxxxxx>
wrote:
Hi Alfredo,

Thanks for the reply! I pasted the logs below.




------------------------------------------------------------------------------------
2014-02-01 14:06:33,350 [ceph_deploy.cli][INFO ] Invoked (1.3.4): /usr/bin/ceph-deploy mon create-initial
2014-02-01 14:06:33,353 [ceph_deploy.mon][DEBUG ] Deploying mon,
cluster
ceph hosts ceph1
2014-02-01 14:06:33,354 [ceph_deploy.mon][DEBUG ] detecting platform
for
host ceph1 ...
2014-02-01 14:06:33,770 [ceph1][DEBUG ] connected to host: ceph1
2014-02-01 14:06:33,775 [ceph1][DEBUG ] detect platform information
from
remote host
2014-02-01 14:06:33,874 [ceph1][DEBUG ] detect machine type
2014-02-01 14:06:33,909 [ceph_deploy.mon][INFO ] distro info: Ubuntu
13.04
raring
2014-02-01 14:06:33,910 [ceph1][DEBUG ] determining if provided host
has
same hostname in remote
2014-02-01 14:06:33,911 [ceph1][DEBUG ] get remote short hostname
2014-02-01 14:06:33,914 [ceph1][DEBUG ] deploying mon to ceph1
2014-02-01 14:06:33,915 [ceph1][DEBUG ] get remote short hostname
2014-02-01 14:06:33,917 [ceph1][DEBUG ] remote hostname: ceph1
2014-02-01 14:06:33,919 [ceph1][DEBUG ] write cluster configuration
to
/etc/ceph/{cluster}.conf
2014-02-01 14:06:33,933 [ceph1][DEBUG ] create the mon path if it
does
not
exist
2014-02-01 14:06:33,939 [ceph1][DEBUG ] checking for done path:
/var/lib/ceph/mon/ceph-ceph1/done
2014-02-01 14:06:33,941 [ceph1][DEBUG ] create a done file to avoid
re-doing
the mon deployment
2014-02-01 14:06:33,944 [ceph1][DEBUG ] create the init path if it
does
not
exist
2014-02-01 14:06:33,946 [ceph1][DEBUG ] locating the `service`
executable...
2014-02-01 14:06:33,949 [ceph1][INFO ] Running command: sudo initctl
emit
ceph-mon cluster=ceph id=ceph1
2014-02-01 14:06:36,119 [ceph1][INFO ] Running command: sudo ceph
--cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok
mon_status
2014-02-01 14:06:36,805 [ceph1][DEBUG ]



********************************************************************************
2014-02-01 14:06:36,807 [ceph1][DEBUG ] status for monitor: mon.ceph1
2014-02-01 14:06:36,809 [ceph1][DEBUG ] {
2014-02-01 14:06:36,810 [ceph1][DEBUG ] "election_epoch": 1,
2014-02-01 14:06:36,812 [ceph1][DEBUG ] "extra_probe_peers": [],
2014-02-01 14:06:36,813 [ceph1][DEBUG ] "monmap": {
2014-02-01 14:06:36,814 [ceph1][DEBUG ] "created": "0.000000",
2014-02-01 14:06:36,815 [ceph1][DEBUG ] "epoch": 1,
2014-02-01 14:06:36,815 [ceph1][DEBUG ] "fsid":
"26835656-6b29-455d-9d1f-545cad8f1e23",
2014-02-01 14:06:36,816 [ceph1][DEBUG ] "modified": "0.000000",
2014-02-01 14:06:36,816 [ceph1][DEBUG ] "mons": [
2014-02-01 14:06:36,817 [ceph1][DEBUG ]       {
2014-02-01 14:06:36,818 [ceph1][DEBUG ] "addr":
"192.168.111.11:6789/0",
2014-02-01 14:06:36,818 [ceph1][DEBUG ] "name": "ceph1",
2014-02-01 14:06:36,819 [ceph1][DEBUG ] "rank": 0
2014-02-01 14:06:36,820 [ceph1][DEBUG ]       }
2014-02-01 14:06:36,821 [ceph1][DEBUG ]     ]
2014-02-01 14:06:36,822 [ceph1][DEBUG ]   },
2014-02-01 14:06:36,826 [ceph1][DEBUG ]   "name": "ceph1",
2014-02-01 14:06:36,826 [ceph1][DEBUG ] "outside_quorum": [],
2014-02-01 14:06:36,826 [ceph1][DEBUG ] "quorum": [
2014-02-01 14:06:36,827 [ceph1][DEBUG ]     0
2014-02-01 14:06:36,827 [ceph1][DEBUG ]   ],
2014-02-01 14:06:36,827 [ceph1][DEBUG ]   "rank": 0,
2014-02-01 14:06:36,827 [ceph1][DEBUG ]   "state": "leader",
2014-02-01 14:06:36,828 [ceph1][DEBUG ] "sync_provider": []
2014-02-01 14:06:36,828 [ceph1][DEBUG ] }
2014-02-01 14:06:36,828 [ceph1][DEBUG ]



********************************************************************************
2014-02-01 14:06:36,829 [ceph1][INFO ] monitor: mon.ceph1 is running
2014-02-01 14:06:36,830 [ceph1][INFO ] Running command: sudo ceph
--cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok
mon_status
2014-02-01 14:06:37,005 [ceph_deploy.mon][INFO ] processing monitor
mon.ceph1
2014-02-01 14:06:37,079 [ceph1][DEBUG ] connected to host: ceph1
2014-02-01 14:06:37,081 [ceph1][INFO ] Running command: sudo ceph
--cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok
mon_status
2014-02-01 14:06:37,258 [ceph_deploy.mon][INFO ] mon.ceph1 monitor
has
reached quorum!
2014-02-01 14:06:37,259 [ceph_deploy.mon][INFO  ] all initial
monitors
are
running and have formed quorum
2014-02-01 14:06:37,266 [ceph_deploy.mon][INFO  ] Running
gatherkeys...
2014-02-01 14:06:37,268 [ceph_deploy.gatherkeys][DEBUG ] Checking
ceph1
for
/etc/ceph/ceph.client.admin.keyring
2014-02-01 14:06:37,336 [ceph1][DEBUG ] connected to host: ceph1
2014-02-01 14:06:37,340 [ceph1][DEBUG ] detect platform information
from
remote host
2014-02-01 14:06:37,373 [ceph1][DEBUG ] detect machine type
2014-02-01 14:06:37,383 [ceph1][DEBUG ] fetch remote file

2014-02-01 14:06:37,385 [ceph_deploy.gatherkeys][WARNING] Unable to
find
/etc/ceph/ceph.client.admin.keyring on ['ceph1']
2014-02-01 14:06:37,391 [ceph_deploy.gatherkeys][DEBUG ] Have
ceph.mon.keyring
2014-02-01 14:06:37,398 [ceph_deploy.gatherkeys][DEBUG ] Checking
ceph1
for
/var/lib/ceph/bootstrap-osd/ceph.keyring
2014-02-01 14:06:37,468 [ceph1][DEBUG ] connected to host: ceph1
2014-02-01 14:06:37,471 [ceph1][DEBUG ] detect platform information
from
remote host
2014-02-01 14:06:37,506 [ceph1][DEBUG ] detect machine type
2014-02-01 14:06:37,514 [ceph1][DEBUG ] fetch remote file

2014-02-01 14:06:37,516 [ceph_deploy.gatherkeys][WARNING] Unable to
find
/var/lib/ceph/bootstrap-osd/ceph.keyring on ['ceph1']
2014-02-01 14:06:37,523 [ceph_deploy.gatherkeys][DEBUG ] Checking
ceph1
for
/var/lib/ceph/bootstrap-mds/ceph.keyring
2014-02-01 14:06:37,591 [ceph1][DEBUG ] connected to host: ceph1
2014-02-01 14:06:37,594 [ceph1][DEBUG ] detect platform information
from
remote host
2014-02-01 14:06:37,627 [ceph1][DEBUG ] detect machine type
2014-02-01 14:06:37,636 [ceph1][DEBUG ] fetch remote file

2014-02-01 14:06:37,639 [ceph_deploy.gatherkeys][WARNING] Unable to
find
/var/lib/ceph/bootstrap-mds/ceph.keyring on ['ceph1']



------------------------------------------------------------------------------------
Does it end here? It seems like the output was trimmed...









(2014/02/03 22:26), Alfredo Deza wrote:
On Sun, Feb 2, 2014 at 12:18 AM, Kei.masumoto
<kei.masumoto@xxxxxxxxx>
wrote:
Hi,

I am a newbie with Ceph; now I am trying to deploy it following
"http://ceph.com/docs/master/start/quick-ceph-deploy/".
ceph1, ceph2 and ceph3 exist according to the above tutorial. I got a
WARNING message when I executed "ceph-deploy mon create-initial".

2014-02-01 14:06:37,385 [ceph_deploy.gatherkeys][WARNING] Unable to
find
/etc/ceph/ceph.client.admin.keyring on ['ceph1']
2014-02-01 14:06:37,516 [ceph_deploy.gatherkeys][WARNING] Unable to
find
/var/lib/ceph/bootstrap-osd/ceph.keyring on ['ceph1']
2014-02-01 14:06:37,639 [ceph_deploy.gatherkeys][WARNING] Unable to
find
/var/lib/ceph/bootstrap-mds/ceph.keyring on ['ceph1']

Thinking about when those 3 keyrings should be created, I think
"ceph-deploy mon create" is the right timing for keyring creation. I
checked my environment, and found
/etc/ceph/ceph.client.admin.keyring.14081.tmp. It looks like this file
is created by ceph-create-keys when executing "stop ceph-all && start
ceph-all", but ceph-create-keys never finishes.
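
(As far as I understand it, ceph-create-keys keeps retrying until the
monitor answers, roughly like the sketch below, which is my own
simplification and not the actual source. If auth get-or-create never
succeeds, the .tmp keyring stays around and the process loops forever.)

import subprocess
import time

def get_admin_key(cluster, mon_id):
    # retry 'ceph auth get-or-create client.admin' until the mon answers
    while True:
        ret = subprocess.call([
            'ceph', '--cluster=%s' % cluster, '--name=mon.',
            '--keyring=/var/lib/ceph/mon/%s-%s/keyring' % (cluster, mon_id),
            'auth', 'get-or-create', 'client.admin',
            'mon', 'allow *', 'osd', 'allow *', 'mds', 'allow',
        ])
        if ret == 0:
            return          # key created/fetched successfully
        time.sleep(1)       # mon not ready (or denying access); retry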
ceph-deploy tries to help here a lot with create-initial, and although
the warnings are useful, they are only good depending on the context of
the rest of the output.

When the whole process completes, does ceph-deploy say all mons are up
and running?

It would be better to paste the complete output of the call so we can
see the details.
When I execute ceph-create-keys manually, it keeps generating the log
below; it looks like it is waiting for a reply...

2014-02-01 20:13:02.847737 7f55e81a4700  0 -- :/1001774 >>
192.168.11.8:6789/0 pipe(0x7f55e4024400 sd=3 :0 s=1 pgs=0 cs=0 l=1
c=0x7f55e4024660).fault

Since I found that the mon listens on 6789, I straced the mon; the mon
is also waiting for something...

root@ceph1:~/src/ceph-0.56.7# strace -p 1047
Process 1047 attached - interrupt to quit
futex(0x7f37c14839d0, FUTEX_WAIT, 1102, NULL

I have no idea what the situation should be; any hints?

P.S. Somebody gave me advice to check the below, but I don't see
anything wrong from here.
root@ceph1:~/my-cluster# ceph daemon mon.`hostname` mon_status
{ "name": "ceph1",
       "rank": 0,
       "state": "leader",
       "election_epoch": 1,
       "quorum": [
             0],
       "outside_quorum": [],
       "extra_probe_peers": [],
       "sync_provider": [],
       "monmap": { "epoch": 1,
           "fsid": "26835656-6b29-455d-9d1f-545cad8f1e23",
           "modified": "0.000000",
           "created": "0.000000",
           "mons": [
                 { "rank": 0,
                   "name": "ceph1",
                   "addr": "192.168.111.11:6789\/0"}]}}


Kei
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






