On 2013-07-24 07:19, Sage Weil wrote:
On Wed, 24 Jul 2013, Sébastien RICCIO wrote:
Hi! While trying to install ceph using ceph-deploy, the monitor nodes are
stuck waiting on this process:
/usr/bin/python /usr/sbin/ceph-create-keys -i a (or b or c)
I tried to run the command manually and it loops on this:
connect to /var/run/ceph/ceph-mon.a.asok failed with (2) No such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
But the existing sockets on the nodes are named like
/var/run/ceph/ceph-mon.ceph01.asok
Is that a bug in ceph-deploy, or is my config file wrong?
It's the config file. You no longer need to (and should not) enumerate the
daemons in the config file; the sysvinit/upstart scripts find them in
/var/lib/ceph/{osd,mon,mds}/*. See below:
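As a quick illustration (hostnames and ids here are taken from this thread's
setup; your directory names may differ), listing the daemon data directories
shows exactly what the init scripts will start:

    # daemons are discovered from their data directories, not from ceph.conf
    ls /var/lib/ceph/mon/    # e.g. ceph-ceph01
    ls /var/lib/ceph/osd/    # e.g. ceph-0  ceph-1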
Hi Sage,
Does this also apply if you didn't use ceph-deploy (but used the same
directories for mon, osd, etc.)? Just curious whether there are still any
dependencies, or whether you still need to list those sections on clients,
for instance.
Cheers,
Peter
Version (ceph -v):
ceph version 0.61.6 (59ddece17e36fef69ecf40e239aeffad33c9db35)
Note that using the "ceph" command-line utility on the nodes is working,
so it looks like it knows the right paths...
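For example, on ceph01 (paths as in the log above) the two access paths can
be compared; the first goes over the network via mon_host, the second needs
the local admin socket, which is named after the mon id (the hostname here,
not "a"):

    ceph -s                                                             # works, uses mon_host
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status        # fails: (2) No such file or directory
    ceph --admin-daemon /var/run/ceph/ceph-mon.ceph01.asok mon_status   # the socket that actually exists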
Config file:
[global]
fsid = a1394dff-94da-4ef4-a123-55d85e839ffb
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 10.111.80.1,10.111.80.2,10.111.80.3
auth_supported = cephx
osd_journal_size = 10000
filestore_xattr_use_omap = true
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
[client]
rbd_cache = true
rbd_cache_size = 536870912
rbd_cache_max_dirty = 134217728
rbd_cache_target_dirty = 33554432
rbd_cache_max_dirty_age = 5
[osd]
osd_data = /var/lib/ceph/osd/ceph-$id
osd_journal = /var/lib/ceph/osd/ceph-$id/journal
osd_journal_size = 10000
osd_mkfs_type = xfs
osd_mkfs_options_xfs = "-f -i size=2048"
osd_mount_options_xfs = "rw,noatime,inode64,logbsize=256k"
keyring = /var/lib/ceph/osd/ceph-$id/keyring.osd.$id
osd_op_threads = 24
osd_disk_threads = 24
osd_recovery_max_active = 1
journal_dio = true
journal_aio = true
filestore_max_sync_interval = 100
filestore_min_sync_interval = 50
filestore_queue_max_ops = 2000
filestore_queue_max_bytes = 536870912
filestore_queue_committing_max_ops = 2000
filestore_queue_committing_max_bytes = 536870912
osd_max_backfills = 1
Just drop everything from here...
[osd.0]
host = ceph01
[osd.1]
host = ceph01
[osd.2]
host = ceph02
[osd.3]
host = ceph02
[osd.4]
host = ceph03
[osd.5]
host = ceph03
[mon.a]
host = ceph01
[mon.b]
host = ceph02
[mon.c]
host = ceph03
...to here!
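After dropping those sections, something like the following (exact service
commands depend on the distro) should confirm everything still starts and
the monitors reach quorum:

    sudo service ceph restart              # sysvinit
    # or with upstart on Ubuntu, per daemon, e.g.:
    #   sudo restart ceph-mon id=ceph01
    ceph -s                                # monitors should form a quorum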
sage
Cheers,
Sébastien
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com