Setup:
hosts: ceph1 and ceph2, each with two spare disks (/dev/vdb and /dev/vdc)
Command steps:
$ ceph-deploy new ceph1
$ ceph-deploy mon create ceph1
$ ceph-deploy gatherkeys ceph1
$ ceph-deploy disk zap ceph1:/dev/vdb
$ ceph-deploy disk zap ceph1:/dev/vdc
$ ceph-deploy disk zap ceph2:/dev/vdb
$ ceph-deploy disk zap ceph2:/dev/vdc
$ ceph-deploy osd create ceph1:/dev/vdb:/dev/vdc
$ ceph-deploy osd create ceph2:/dev/vdb:/dev/vdc
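For reference, the ceph.conf written by ceph-deploy new ceph1 on the admin node should look roughly like the following (reconstructed from the monmap shown further down; other generated defaults omitted):

[global]
fsid = 66b96359-771c-467e-9f9a-060d82ab6a0c
mon initial members = ceph1
mon host = 192.168.122.21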
The last command (the osd create for ceph2) complains:
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph2:/dev/vdb:/dev/vdc
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph2
[ceph2][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph2][INFO ] keyring file does not exist, creating one at: /var/lib/ceph/bootstrap-osd/ceph.keyring
[ceph2][INFO ] create mon keyring file
[ceph2][ERROR ] Traceback (most recent call last):
[ceph2][ERROR ] File "/home/markir/develop/python/ceph-deploy/ceph_deploy/util/decorators.py", line 10, in inner
[ceph2][ERROR ] File "/home/markir/develop/python/ceph-deploy/ceph_deploy/osd.py", line 14, in write_keyring
[ceph2][ERROR ] NameError: global name 'key' is not defined
[ceph2][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph2 disk /dev/vdb journal /dev/vdc activate True
[ceph2][INFO ] Running command: ceph-disk-prepare --cluster ceph -- /dev/vdb /dev/vdc
[ceph2][INFO ] Information: Moved requested sector from 34 to 2048 in
[ceph2][INFO ] order to align on 2048-sector boundaries.
[ceph2][INFO ] The operation has completed successfully.
[ceph2][INFO ] Information: Moved requested sector from 34 to 2048 in
[ceph2][INFO ] order to align on 2048-sector boundaries.
[ceph2][INFO ] The operation has completed successfully.
[ceph2][INFO ] meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=327615 blks
[ceph2][INFO ] = sectsz=512 attr=2, projid32bit=0
[ceph2][INFO ] data = bsize=4096 blocks=1310459, imaxpct=25
[ceph2][INFO ] = sunit=0 swidth=0 blks
[ceph2][INFO ] naming =version 2 bsize=4096 ascii-ci=0
[ceph2][INFO ] log =internal log bsize=4096 blocks=2560, version=2
[ceph2][INFO ] = sectsz=512 sunit=0 blks, lazy-count=1
[ceph2][INFO ] realtime =none extsz=4096 blocks=0, rtextents=0
[ceph2][INFO ] The operation has completed successfully.
[ceph2][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[ceph2][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Host ceph2 is now ready for osd use.
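The NameError above means ceph-deploy never wrote /var/lib/ceph/bootstrap-osd/ceph.keyring on ceph2, so although ceph-disk-prepare succeeds, the new OSD has no bootstrap key with which to register itself against the monitor. As an untested workaround sketch (assuming gatherkeys left ceph.bootstrap-osd.keyring in the working directory on the admin node), one could push the key by hand and re-run the udev trigger that ceph-deploy uses for activation:

$ scp ceph.bootstrap-osd.keyring ceph2:
$ ssh ceph2 sudo mv ceph.bootstrap-osd.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
$ ssh ceph2 sudo udevadm trigger --subsystem-match=block --action=add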
And the OSD on ceph2 is not created successfully; the cluster only ever reports the single OSD from ceph1:
$ ceph -w
cluster 66b96359-771c-467e-9f9a-060d82ab6a0c
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {ceph1=192.168.122.21:6789/0}, election epoch 2, quorum 0 ceph1
osdmap e5: 1 osds: 1 up, 1 in
pgmap v9: 192 pgs: 192 active+degraded; 0 bytes data, 34964 KB used, 5074 MB / 5108 MB avail
mdsmap e1: 0/0/1 up
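The 192 active+degraded / stuck unclean PGs are consistent with a single OSD and the default replication size of 2 (each PG has only one copy and nowhere to place the second), so the health warning is just a symptom of the missing ceph2 OSD. To confirm that only the ceph1 OSD ever registered, something like:

$ ceph osd tree

should show a single osd.0 under ceph1 and no entry for ceph2.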
Regards
Mark