Ceph Setup Woes




Hello all,

 

Looking for some support with installation.  I've followed the installation
guide on the main website.  I have one virtual server that will act as the
admin node and first monitor (ceph-master), plus three physical servers
(ceph1, ceph2 & ceph3) for OSD usage, two of which will also run monitors.
All four machines run CentOS 6.5 x86_64.  Here are the software versions:

 

[root@ceph-master ~]# rpm -qa | grep ceph

ceph-deploy-1.5.7-0.noarch

ceph-0.80.1-0.el6.x86_64

libcephfs1-0.80.1-0.el6.x86_64

python-ceph-0.80.1-0.el6.x86_64

ceph-release-1-0.el6.noarch

 

 

Here are some of the issues I am seeing, any assistance/guidance would be
appreciated:

 

/////////////////

[ceph@ceph-master cluster]$ ceph-deploy new ceph-master ceph1 ceph2 ceph3

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

Unhandled exception in thread started by

Error in sys.excepthook:

 

Original exception was:

 

/////////////////
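Despite that exception, ceph-deploy may still have written its output files.
This is how I'm checking from my working directory (filenames are the
defaults it reports, e.g. ceph.mon.keyring above):

```shell
# Check whether "ceph-deploy new" produced its files despite the
# unhandled exception (run from the cluster/ working directory):
ls -l ceph.conf ceph.mon.keyring
```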

 

The command ceph status also produces an error.  This was run after I
installed the monitors:

 

/////////////////

[ceph@ceph-master cluster]$ ceph status

2014-07-06 20:06:27.114990 7f1f74ba0700 -1 monclient(hunting): ERROR:
missing keyring, cannot use cephx for authentication

2014-07-06 20:06:27.115021 7f1f74ba0700  0 librados: client.admin
initialization error (2) No such file or directory

Error connecting to cluster: ObjectNotFound

 

[ceph@ceph-master root]$ service ceph status

=== mon.ceph-master ===

mon.ceph-master: running failed: '/usr/bin/ceph --admin-daemon
/var/run/ceph/ceph-mon.ceph-master.asok version 2>/dev/null'

/////////////////
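For what it's worth, this is how I've been looking for the admin keyring
that the error complains about (paths are the quick-start defaults, so this
is a guess at where it should be):

```shell
# The "missing keyring" error suggests client.admin has no keyring in
# the default search path; check both the system path and the
# ceph-deploy working directory:
ls -l /etc/ceph/ceph.client.admin.keyring
ls -l ./ceph.client.admin.keyring

# If it only exists in the working directory, pointing ceph at it
# directly should work:
ceph --keyring ./ceph.client.admin.keyring --conf ./ceph.conf status
```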

 

 

I am also unable to get the other monitors to show up on the master node
when running ceph status.  If I log onto each node, the monitors appear to
be running.
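To check, I've been querying each monitor over its admin socket (the same
socket path that service ceph status uses above, with the hostname
substituted per node):

```shell
# Run locally on each monitor host (here ceph1); the quorum/monmap
# sections of the output should list every monitor that has joined:
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
```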

 

/////////////////

During OSD creation I get the following errors after running the prepare
command:

 

[ceph@ceph-master cluster]$ ceph-deploy osd prepare ceph3:/dev/sdc

[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.7): /usr/bin/ceph-deploy osd prepare
ceph3:/dev/sdc

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph3:/dev/sdc:

[ceph3][DEBUG ] connected to host: ceph3

[ceph3][DEBUG ] detect platform information from remote host

[ceph3][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to ceph3

[ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph3][INFO  ] Running command: sudo udevadm trigger
--subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host ceph3 disk /dev/sdc journal None
activate False

[ceph3][INFO  ] Running command: sudo ceph-disk-prepare --fs-type xfs
--cluster ceph -- /dev/sdc

[ceph3][DEBUG ] Information: Moved requested sector from 34 to 2048 in

[ceph3][DEBUG ] order to align on 2048-sector boundaries.

[ceph3][DEBUG ] The operation has completed successfully.

[ceph3][DEBUG ] Information: Moved requested sector from 10485761 to
10487808 in

[ceph3][DEBUG ] order to align on 2048-sector boundaries.

[ceph3][DEBUG ] The operation has completed successfully.

[ceph3][DEBUG ] meta-data=/dev/sdc1              isize=2048   agcount=4,
agsize=121766917 blks

[ceph3][DEBUG ]          =                       sectsz=512   attr=2,
projid32bit=0

[ceph3][DEBUG ] data     =                       bsize=4096
blocks=487067665, imaxpct=5

[ceph3][DEBUG ]          =                       sunit=0      swidth=0 blks

[ceph3][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0

[ceph3][DEBUG ] log      =internal log           bsize=4096   blocks=237826,
version=2

[ceph3][DEBUG ]          =                       sectsz=512   sunit=0 blks,
lazy-count=1

[ceph3][DEBUG ] realtime =none                   extsz=4096   blocks=0,
rtextents=0

[ceph3][DEBUG ] The operation has completed successfully.

[ceph3][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdc

[ceph3][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdc

[ceph3][WARNIN] INFO:ceph-disk:re-reading known partitions will display
errors

[ceph3][WARNIN] BLKPG: Device or resource busy

[ceph3][WARNIN] error adding partition 2

[ceph3][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdc

[ceph3][WARNIN] INFO:ceph-disk:re-reading known partitions will display
errors

[ceph3][WARNIN] BLKPG: Device or resource busy

[ceph3][WARNIN] error adding partition 1

[ceph3][WARNIN] BLKPG: Device or resource busy

[ceph3][WARNIN] error adding partition 2

[ceph3][INFO  ] checking OSD status...

[ceph3][INFO  ] Running command: sudo ceph --cluster=ceph osd stat
--format=json

[ceph_deploy.osd][DEBUG ] Host ceph3 is now ready for osd use.

[ceph@ceph-master cluster]$ ceph-deploy osd activate ceph3:/dev/sdc

[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.7): /usr/bin/ceph-deploy osd activate
ceph3:/dev/sdc

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph3:/dev/sdc:

[ceph3][DEBUG ] connected to host: ceph3

[ceph3][DEBUG ] detect platform information from remote host

[ceph3][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] activating host ceph3 disk /dev/sdc

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[ceph3][INFO  ] Running command: sudo ceph-disk-activate --mark-init
sysvinit --mount /dev/sdc

[ceph3][WARNIN] ceph-disk: Cannot discover filesystem type: device /dev/sdc:
Line is truncated:

[ceph3][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
ceph-disk-activate --mark-init sysvinit --mount /dev/sdc

/////////////////
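Given the "error adding partition" warnings during prepare, I suspect the
kernel never re-read the new partition table, so ceph-disk-activate sees the
bare device rather than the XFS partition.  Here is what I'm planning to try
next (a guess on my part; sdc1 would be the data partition that
ceph-disk-prepare created):

```shell
# On ceph3: force the kernel to re-read /dev/sdc's partition table
sudo partprobe /dev/sdc        # or: sudo partx -a /dev/sdc
grep sdc /proc/partitions      # sdc1 and sdc2 should now be listed

# Then retry activation against the data partition rather than the
# whole disk:
sudo ceph-disk-activate --mark-init sysvinit --mount /dev/sdc1
```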

 

Any tips for getting the install to function properly?

 

Regards,

Chris

 


