Ceph Firefly on CentOS 6.5: cannot deploy OSDs

Hi,

I have just started to dabble with Ceph and went through the docs at
http://ceph.com/howto/deploying-ceph-with-ceph-deploy/


I have a 3-node setup, with 2 of the nodes acting as OSD hosts.

I am using the ceph-deploy mechanism.
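
For reference, these are roughly the commands I ran from the admin node to
bring up the monitors (hostnames are mine; the invocations are from memory,
so treat this as a sketch rather than a verbatim transcript):

--snip--
# cc01-cc03 are my monitor hosts
ceph-deploy new cc01 cc02 cc03
ceph-deploy install cc01 cc02 cc03
ceph-deploy mon create-initial
--snip--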

The Ceph init scripts expect the cluster configuration file to be named
ceph.conf; if I give it any other name, the init scripts don't work. So for
test purposes I'm using ceph.conf.
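
If I understood the docs correctly, a non-default name would be passed with
the --cluster flag (an assumption on my part, since I could not get the
sysvinit script on CentOS 6 to pick it up), e.g.:

--snip--
# hypothetical: create a cluster named "test" instead of "ceph";
# the sysvinit script would then need to know about test.conf as well
ceph-deploy --cluster test new cc01 cc02 cc03
--snip--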


--ceph.conf--
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 172.18.1.31,172.18.1.32,172.18.1.33
mon_initial_members = cc01, cc02, cc03
fsid = b58e50f1-13a3-4b14-9cff-32b6edd851c9
--snip--
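
The same file is pushed to all nodes from the admin node (this is how I have
been distributing it; --overwrite-conf replaces any existing copy):

--snip--
ceph-deploy --overwrite-conf config push cc01 cc02 cc03
--snip--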

I managed to get the monitors deployed, but ceph -s reports a health error:

--snip--
 ceph -s
    cluster b58e50f1-13a3-4b14-9cff-32b6edd851c9
     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
     monmap e1: 3 mons at {cc01=172.18.1.31:6789/0,cc02=172.18.1.32:6789/0,cc03=172.18.1.33:6789/0}, election epoch 4, quorum 0,1,2 cc01,cc02,cc03
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating
--snip--
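
In case the output above is not enough, these are the commands I have been
using to poke at the cluster state (they don't show much more, given there
are no OSDs yet):

--snip--
ceph health detail   # lists the stuck pgs individually
ceph osd tree        # shows an empty CRUSH tree at the moment
--snip--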

I tried creating two OSDs. They fail too, which probably has to do with the
health error above.

 --snip--
 ceph-deploy osd create cc01:/dev/sdb cc02:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.2): /usr/bin/ceph-deploy osd create cc01:/dev/sdb cc02:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cc01:/dev/sdb: cc02:/dev/sdb:
[cc01][DEBUG ] connected to host: cc01
[cc01][DEBUG ] detect platform information from remote host
[cc01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to cc01
[cc01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cc01][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host cc01 disk /dev/sdb journal None activate True
[cc01][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb
[cc01][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[cc01][WARNIN] Could not create partition 2 from 10485761 to 10485760
[cc01][WARNIN] Error encountered; not saving changes.
[cc01][WARNIN] ceph-disk: Error: Command '['/usr/sbin/sgdisk', '--new=2:0:5120M', '--change-name=2:ceph journal', '--partition-guid=2:d882631c-0069-4238-86df-9762ad478daa', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/sdb']' returned non-zero exit status 4
[cc01][DEBUG ] Setting name!
[cc01][DEBUG ] partNum is 1
[cc01][DEBUG ] REALLY setting name!
[cc01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb
[cc02][DEBUG ] connected to host: cc02
[cc02][DEBUG ] detect platform information from remote host
[cc02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to cc02
[cc02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cc02][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host cc02 disk /dev/sdb journal None activate True
[cc02][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb
[cc02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[cc02][WARNIN] Could not create partition 2 from 10485761 to 10485760
[cc02][WARNIN] Error encountered; not saving changes.
[cc02][WARNIN] ceph-disk: Error: Command '['/usr/sbin/sgdisk', '--new=2:0:5120M', '--change-name=2:ceph journal', '--partition-guid=2:486c9081-a73c-4906-b97a-c03458feba26', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/sdb']' returned non-zero exit status 4
[cc02][DEBUG ] Found valid GPT with corrupt MBR; using GPT and will write new protective MBR on save.
[cc02][DEBUG ] Setting name!
[cc02][DEBUG ] partNum is 1
[cc02][DEBUG ] REALLY setting name!
[cc02][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
--snip--
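
One thing I am planning to try next, assuming the sgdisk failure means
/dev/sdb still carries partition metadata from an earlier attempt, is to zap
the disks before re-running the prepare step:

--snip--
# assumption: stale GPT/MBR data on /dev/sdb is what trips up sgdisk
ceph-deploy disk zap cc01:/dev/sdb cc02:/dev/sdb
ceph-deploy osd create cc01:/dev/sdb cc02:/dev/sdb
--snip--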

Any pointers to fix this issue would be appreciated.

Cheers