On Thu, Aug 15, 2013 at 7:45 AM, Nico Massenberg <nico.massenberg@xxxxxxxxxxx> wrote:
Hello there,
I am deploying a development system with 3 hosts. I want to deploy a monitor on each of those hosts and several OSDs, 1 per disk.
In addition I have an admin machine to run ceph-deploy from. So far I have 1 mon on ceph01 and a total of 6 OSDs on ceph01 and ceph02 in a healthy cluster; the status and a rough sketch of the commands used follow:
ceph@vl0181:~/konkluster$ ceph -s -k ceph.client.admin.keyring
health HEALTH_OK
monmap e1: 1 mons at {ceph01=192.168.111.10:6789/0}, election epoch 1, quorum 0 ceph01
osdmap e78: 6 osds: 6 up, 6 in
pgmap v248: 192 pgs: 192 active+clean; 0 bytes data, 211 MB used, 3854 GB / 3854 GB avail
mdsmap e1: 0/0/1 up
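For completeness, that state came from a ceph-deploy sequence roughly like the one below. The hostnames are the ones above; the disk names (sdb, sdc, sdd) are only placeholders for the actual disks in each host.

ceph-deploy new ceph01                                    # write ceph.conf with ceph01 as the initial mon
ceph-deploy install ceph01 ceph02 ceph03                  # install the cuttlefish packages on all hosts
ceph-deploy mon create ceph01                             # bootstrap the first monitor
ceph-deploy gatherkeys ceph01                             # fetch the admin and bootstrap keyrings
ceph-deploy osd create ceph01:sdb ceph01:sdc ceph01:sdd   # one osd per disk
ceph-deploy osd create ceph02:sdb ceph02:sdc ceph02:sdd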
When trying to add mon2 and mon3 on ceph02 and ceph03, I am confronted with the following error:
ceph@vl0181:~/konkluster$ ceph-deploy mon create ceph02
ceph-mon: set fsid to 3dad736b-a9fc-42bf-a2fb-399cb8cbb880
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph03 for mon.ceph02
=== mon.ceph02 ===
Starting Ceph mon.ceph02 on ceph02...
failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i ceph02 --pid-file /var/run/ceph/mon.ceph03.pid -c /etc/ceph/ceph.conf '
Starting ceph-create-keys on ceph02...
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 21, in <module>
Running ps aux | grep ceph on the target afterwards shows quite unusual output:
root@ceph02:~# ps aux |grep ceph
root 2501 0.1 0.0 26652 6952 ? S 11:47 0:08 /usr/bin/python /usr/sbin/ceph-create-keys -i ceph02
root 2677 0.0 0.1 413448 17324 ? Ssl 11:47 0:04 /usr/bin/ceph-osd -i 5 --pid-file /var/run/ceph/osd.5.pid -c /etc/ceph/ceph.conf
root 2684 0.0 0.0 4096 612 ? Ss 11:47 0:00 startpar -f -- ceph
root 4069 0.0 0.0 71172 3564 ? Ss 11:53 0:00 sshd: ceph [priv]
ceph 4071 0.0 0.0 71536 1804 ? S 11:53 0:00 sshd: ceph@notty
ceph 4072 0.0 0.0 4176 580 ? Ss 11:53 0:00 sh -c "sudo" "python" "-u" "-c" "exec reduce(lambda a,b: a+b, map(chr, (105,109,112,111,114,116,32,95,95,98,117,105,108,116,105,110,95,95,44,32,111,115,44,32,109,97,114,115,104,97,108,44,32,115,121,115,10,116,114,121,58,10,32,32,32,32,105,109,112,111,114,116,32,104,97,115,104,108,105,98,10,101,120,99,101,112,116,32,73,109,112,111,114,116,69,114,114,111,114,58,10,32,32,32,32,105,109,112,111,114,116,32,109,100,53,32,97,115,32,104,97,115,104,108,105,98,10,10,35,32,66,97,99,107,32,117,112,32,111,108,100,32,115,116,100,105,110,47,115,116,100,111,117,116,46,10,115,116,100,111,117,116,32,61,32,111,115,46,102,100,111,112,101,110,40,111,115,46,100,117,112,40,115,121,115,46,115,116,100,111,117,116,46,102,105,108,101,110,111,40,41,41,44,32,34,119,98,34,44,32,48,41,10,115,116,100,105,110,32,61,32,111,115,46,102,100,111,112,101,110,40,111,115,46,100,117,112,40,115,121,115,46,115,116,100,105,110,46,102,105,108,101,110,111,40,
41,41,44,32,34,114,98,34,44,32,48,41,10,116,114,121,58,10,32,32,32,32,105,109,112,111,114,116,32,109,115,118,99,114,116,10,32,32,32,32,109,115,118,99,114,116,46,115,101,116,109,111,100,101,40,115,116,100,111,117,116,46,102,105,108,101,110,111,40,41,44,32,111,115,46,79,95,66,73,78,65,82,89,41,10,32,32,32,32,109,115,118,99,114,116,46,115,101,116,109,111,100,101,40,115,116,100,105,110,46,102,105,108,101,110,111,40,41,44,32,111,115,46,79,95,66,73,78,65,82,89,41,10,101,120,99,101,112,116,32,73,109,112,111,114,116,69,114,114,111,114,58,32,112,97,115,115,10,115,121,115,46,115,116,100,111,117,116,46,99,108,111,115,101,40,41,10,115,121,115,46,115,116,100,105,110,46,99,108,111,115,101,40,41,10,10,115,101,114,118,101,114,83,111,117,114,99,101,76,101,110,103,116,104,32,61,32,50,48,48,49,57,10,115,101,114,118,101,114,83,111,117,114,99,101,32,61,32,115,116,100,105,110,46,114,101,97,100,40,115,101,114,118,101,114,83,111,117,114,99,101,76,101,110,103,116,104,41,10,119,104,105,108,101,32,108
,101,110,40,115,101,114,118,101,114,83,111,117,114,99,101,41,32,60,32,115,101,114,118,101,114,83,111,117,114,99,101,76,101,110,103,116,104,58,10,32,32,32,32,115,101,114,118,101,114,83,111,117,114,99,101,32,43,61,32,115,116,100,105,110,46,114,101,97,100,40,115,101,114,118,101,114,83,111,117,114,99,101,76,101,110,103,116,104,32,45,32,108,101,110,40,115,101,114,118,101,114,83,111,117,114,99,101,41,41,10,10,116,114,121,58,10,32,32,32,32,97,115,115,101,114,116,32,104,97,115,104,108,105,98,46,109,100,53,40,115,101,114,118,101,114,83,111,117,114,99,101,41,46,100,105,103,101,115,116,40,41,32,61,61,32,39,92,120,97,99,92,120,100,49,57,92,120,98,51,92,120,100,48,92,120,101,97,100,92,120,100,54,108,92,120,56,100,92,120,101,48,75,47,92,120,57,99,41,105,39,10,32,32,32,32,95,95,98,117,105,108,116,105,110,95,95,46,112,117,115,104,121,95,115,111,117,114,99,101,32,61,32,115,101,114,118,101,114,83,111,117,114,99,101,10,32,32,32,32,115,101,114,118,101,114,67,111,100,101,32,61,32,109,97,114,115,
104,97,108,46,108,111,97,100,115,40,115,101,114,118,101,114,83,111,117,114,99,101,41,10,32,32,32,32,101,120,101,99,32,115,101,114,118,101,114,67,111,100,101,10,32,32,32,32,112,117,115,104,121,95,115,101,114,118,101,114,40,115,116,100,105,110,44,32,115,116,100,111,117,116,41,10,101,120,99,101,112,116,58,10,32,32,32,32,105,109,112,111,114,116,32,116,114,97,99,101,98,97,99,107,10,32,32,32,32,35,32,85,110,99,111,109,109,101,110,116,32,102,111,114,32,100,101,98,117,103,103,105,110,103,10,32,32,32,32,35,32,116,114,97,99,101,98,97,99,107,46,112,114,105,110,116,95,101,120,99,40,102,105,108,101,61,111,112,101,110,40,34,115,116,100,101,114,114,46,116,120,116,34,44,32,34,119,34,41,41,10,32,32,32,32,114,97,105,115,101)))"
root 4190 0.1 0.0 36608 7176 ? S 11:53 0:07 /usr/bin/python /usr/sbin/ceph-create-keys -i ceph02
No problems occurred during the initial mon setup.
All hosts run Debian Squeeze with a 3.6 kernel and the Cuttlefish stable release.
Any ideas would be highly appreciated!
Hi Nico,
We are currently dealing with this issue directly (see ticket: http://tracker.ceph.com/issues/5895). We are having problems with the upstream library ceph-deploy uses to do remote calls.
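The long run of numbers in your ps output comes from that same library: it is Python source encoded as character codes, which the library pipes over SSH to bootstrap itself on the target, so the process listing itself is harmless. If you are curious, pasting the comma-separated numbers into a file (payload.txt is just a made-up name here) and decoding them turns it back into readable source:

tr -d ' \n' < payload.txt | awk -v RS=',' '{ printf "%c", $1 }'   # print each code as its character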
I'm sure, however, that the process to create the mon completes correctly, except that `ceph-create-keys` might take a while to complete.
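If you want to double-check on ceph02, something along these lines should show the monitor running and the keyrings appearing once ceph-create-keys finishes (a sketch; the admin-socket and keyring paths are the cuttlefish defaults):

ls /var/lib/ceph/mon/                                                     # mon data directory created by ceph-deploy
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph02.asok mon_status         # ask the running mon for its state
ls /etc/ceph/ /var/lib/ceph/bootstrap-osd/ /var/lib/ceph/bootstrap-mds/   # keyrings show up when ceph-create-keys is done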
We should have a resolution for this specific problem today and a ceph-deploy release to follow soon after.
Thanks for letting us know!
Thanks, Nico
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com