Need help with Ceph Firefly install


 



I am at the point of the install where I am creating 2 OSDs, and it is failing.

My setup is one "head node" (mon + admin) and two OSD nodes; all three are
running Scientific Linux 6.5.

I am following the quick deploy guide found here:
ceph.com/docs/master/start/quick-ceph-deploy/

I get to the point (with a little tweaking, i.e. creating a few directories
under /var/lib/ceph on the boxes) where I activate the newly created
OSDs, and then I run into this issue:
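Concretely, the "tweaking" was recreating the stock /var/lib/ceph layout by hand (the exact directory set below is my reading of a stock Firefly install, so treat it as a sketch; it is written against a scratch root so it can be dry-run anywhere — on the real nodes substitute /var/lib/ceph and use sudo):

```shell
# Recreate the directory layout ceph-deploy/ceph-disk expect under
# /var/lib/ceph. "root" here is a scratch stand-in for /var/lib/ceph.
root=$(mktemp -d)
mkdir -p "$root/tmp" "$root/osd" "$root/mon" "$root/bootstrap-osd" "$root/bootstrap-mds"
ls "$root"
```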

[ceph@cephmon0 kcrn_ceph_cluster]$ ceph-deploy -v osd activate cephosd0:/dev/sda1:/dev/sda2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.9): /usr/bin/ceph-deploy -v osd activate cephosd0:/dev/sda1:/dev/sda2
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cephosd0:/dev/sda1:/dev/sda2
[cephosd0][DEBUG ] connected to host: cephosd0
[cephosd0][DEBUG ] detect platform information from remote host
[cephosd0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Scientific Linux 6.5 Carbon
[ceph_deploy.osd][DEBUG ] activating host cephosd0 disk /dev/sda1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[cephosd0][INFO  ] Running command: sudo ceph-disk-activate --mark-init sysvinit --mount /dev/sda1
[cephosd0][WARNIN] got monmap epoch 1
[cephosd0][WARNIN] 2014-07-30 17:09:23.767813 7fc12f84a7a0 -1 filestore(/var/lib/ceph/tmp/mnt.S5pmf6) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.S5pmf6/journal: (2) No such file or directory
[cephosd0][WARNIN] 2014-07-30 17:09:23.767851 7fc12f84a7a0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -2
[cephosd0][WARNIN] 2014-07-30 17:09:23.767903 7fc12f84a7a0 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.S5pmf6: (2) No such file or directory
[cephosd0][WARNIN] ERROR:ceph-disk:Failed to activate
[cephosd0][WARNIN] Traceback (most recent call last):
[cephosd0][WARNIN]   File "/usr/sbin/ceph-disk", line 2592, in <module>
[cephosd0][WARNIN]     main()
[cephosd0][WARNIN]   File "/usr/sbin/ceph-disk", line 2570, in main
[cephosd0][WARNIN]     args.func(args)
[cephosd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1922, in main_activate
[cephosd0][WARNIN]     init=args.mark_init,
[cephosd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1698, in mount_activate
[cephosd0][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[cephosd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1861, in activate
[cephosd0][WARNIN]     keyring=keyring,
[cephosd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1496, in mkfs
[cephosd0][WARNIN]     '--keyring', os.path.join(path, 'keyring'),
[cephosd0][WARNIN]   File "/usr/sbin/ceph-disk", line 303, in command_check_call
[cephosd0][WARNIN]     return subprocess.check_call(arguments)
[cephosd0][WARNIN]   File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
[cephosd0][WARNIN]     raise CalledProcessError(retcode, cmd)
[cephosd0][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/lib/ceph/tmp/mnt.S5pmf6/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.S5pmf6', '--osd-journal', '/var/lib/ceph/tmp/mnt.S5pmf6/journal', '--osd-uuid', '4af35aac-051c-4721-829d-f16599b259a8', '--keyring', '/var/lib/ceph/tmp/mnt.S5pmf6/keyring']' returned non-zero exit status 1
[cephosd0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/sda1
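For what it's worth, the "(2) No such file or directory" from mkjournal looks like a dangling-symlink failure: as I understand it, ceph-disk writes the journal on the data partition as a symlink to /dev/disk/by-partuuid/&lt;uuid&gt;, and if udev never created that by-partuuid entry (which I gather can happen on EL6), opening the journal fails with errno 2. A minimal reproduction of just the mechanism, in a scratch directory (the target path below is made up):

```shell
# Demonstrate the failure mode: a dangling journal symlink gives
# errno 2 (No such file or directory) on open, matching the log above.
mnt=$(mktemp -d)
ln -s /nonexistent/by-partuuid/feedface "$mnt/journal"   # dangling, like the OSD journal
target=$(readlink "$mnt/journal")
if ! cat "$mnt/journal" 2>/dev/null; then
    echo "open failed: $target does not exist"
fi
rm -rf "$mnt"
```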


Firewalls are turned off, all of the nodes have the
ceph.client.admin.keyring, and SELinux is disabled.

Output of ceph-deploy disk list:

[ceph@cephmon0 kcrn_ceph_cluster]$ ceph-deploy disk list kcrni2cephosd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.9): /usr/bin/ceph-deploy disk list kcrni2cephosd0
[cephosd0][DEBUG ] connected to host: kcrni2cephosd0
[cephosd0][DEBUG ] detect platform information from remote host
[cephosd0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Scientific Linux 6.5 Carbon
[ceph_deploy.osd][DEBUG ] Listing disks on kcrni2cephosd0...
[cephosd0][DEBUG ] find the location of an executable
[cephosd0][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[cephosd0][DEBUG ] /dev/sda :
[cephosd0][DEBUG ]  /dev/sda1 ceph data, prepared, cluster ceph, osd.0, journal /dev/sda2
[cephosd0][DEBUG ]  /dev/sda2 ceph journal, for /dev/sda1
[cephosd0][DEBUG ] /dev/sdb :
[cephosd0][DEBUG ]  /dev/sdb1 other, ext4, mounted on /
[cephosd0][DEBUG ]  /dev/sdb2 swap, swap
[cephosd0][DEBUG ]  /dev/sdb3 other, LVM2_member
[cephosd0][DEBUG ] /dev/sdc other, unknown
[cephosd0][DEBUG ] /dev/sdd other, unknown
[cephosd0][DEBUG ] /dev/sr0 other, unknown
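Based on the above (and this is only a guess about the failure mode), the next things I'd check on cephosd0 are whether udev exposed the GPT partitions under /dev/disk/by-partuuid at all, and if not, zap and re-prepare the disk once udev is sorted out:

```shell
# See whether udev created by-partuuid entries at all; ceph-disk points
# the OSD journal symlink at these (assumption about the failure mode).
out=$(ls /dev/disk/by-partuuid/ 2>/dev/null)
[ -n "$out" ] || out="no by-partuuid entries -- udev likely did not create them"
echo "$out"
# If they are missing, fix udev first, then redo the disk from the admin node:
#   ceph-deploy disk zap cephosd0:/dev/sda
#   ceph-deploy osd prepare cephosd0:/dev/sda1:/dev/sda2
```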

Any pointers would be really appreciated.

thanks,

almightybeeij



