Luminous "ceph-disk activate" issue

Hello,
I am on Jewel 10.2.10 and planning to upgrade to Luminous. I thought I'd proceed the same way as for the upgrade to Jewel, by running ceph-ansible on the OSD nodes one by one, then on the MON nodes one by one.
	---> Is this a sensible way to upgrade to Luminous?
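(In case it matters: I run the playbook against one host at a time, roughly as below; the inventory and host names here are only placeholders, not my real ones.)

  # run ceph-ansible's site.yml against a single OSD node
  ansible-playbook -i hosts site.yml --limit osd-node-1
  # ...repeat for each OSD node, then for each MON node
  ansible-playbook -i hosts site.yml --limit mon-node-1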

Problem: on the first OSD node, "ceph-disk activate" fails as shown at the end of this message.

Note that I am using a slightly modified version of ceph-ansible, which is capable of handling my FibreChannel devices; I have just re-aligned it with the official ceph-ansible. My changes (https://github.com/fgal/ceph-ansible.git) merely build a "devices" list, and as long as I set
  ceph_stable_release: jewel
ceph-ansible works fine, so this should rule out both the /dev/disk/by-part* handling and my changes. When I change it to "luminous" I see the problem. I guess the behaviour of ceph-disk has changed in the meantime... I also tried going back to 12.2.1, the last release before ceph-disk was superseded by ceph-volume, and I see the same problem.
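(To make sure each node really ends up on the release I asked for, after the playbook run I simply check what got installed; something along these lines on my RPM-based nodes:)

  # verify which Ceph release actually landed on the OSD node
  rpm -q ceph-osd
  ceph --version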
It looks to me like the problematic command could be this one (note the '-' after -i):
ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2ceb1f9f-5cf8-46fc-bf8c-2a905e5238b6
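(If anyone wants to reproduce this outside of ceph-ansible/ceph-disk, the same command can be fed some stdin by hand; the UUID is the one from my log below, and the empty JSON is only a stand-in for the key material ceph-disk would normally pipe in. Careful: on a cluster that does understand "osd new" this would actually register a new OSD.)

  echo '{}' | ceph --cluster ceph --name client.bootstrap-osd \
      --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
      -i - osd new 2ceb1f9f-5cf8-46fc-bf8c-2a905e5238b6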

  Does anyone have an idea what the problem could be?
  Thanks for your help!

			Fulvio


[root@r3srv05.pa1 ~]# ceph-disk -v activate /dev/mapper/3600a0980005da3a20000136058a22992p1
main_activate: path = /dev/mapper/3600a0980005da3a20000136058a22992p1
get_dm_uuid: get_dm_uuid /dev/mapper/3600a0980005da3a20000136058a22992p1 uuid path is /sys/dev/block/253:25/dm/uuid
get_dm_uuid: get_dm_uuid /dev/mapper/3600a0980005da3a20000136058a22992p1 uuid is part1-mpath-3600a0980005da3a20000136058a22992
get_dm_uuid: get_dm_uuid /dev/mapper/3600a0980005da3a20000136058a22992p1 uuid path is /sys/dev/block/253:25/dm/uuid
get_dm_uuid: get_dm_uuid /dev/mapper/3600a0980005da3a20000136058a22992p1 uuid is part1-mpath-3600a0980005da3a20000136058a22992
command: Running command: /usr/sbin/blkid -o udev -p /dev/mapper/3600a0980005da3a20000136058a22992p1
get_dm_uuid: get_dm_uuid /dev/mapper/3600a0980005da3a20000136058a22992p1 uuid path is /sys/dev/block/253:25/dm/uuid
get_dm_uuid: get_dm_uuid /dev/mapper/3600a0980005da3a20000136058a22992p1 uuid is part1-mpath-3600a0980005da3a20000136058a22992
command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/mapper/3600a0980005da3a20000136058a22992p1
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
mount: Mounting /dev/mapper/3600a0980005da3a20000136058a22992p1 on /var/lib/ceph/tmp/mnt.aCTRx9 with options noatime,nodiratime,largeio,inode64,swalloc,logbsize=256k,allocsize=4M
command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,nodiratime,largeio,inode64,swalloc,logbsize=256k,allocsize=4M -- /dev/mapper/3600a0980005da3a20000136058a22992p1 /var/lib/ceph/tmp/mnt.aCTRx9
command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.aCTRx9
activate: Cluster uuid is 9a9eedd0-9400-488e-96de-c349fffad7c4
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
activate: Cluster name is ceph
activate: OSD uuid is 2ceb1f9f-5cf8-46fc-bf8c-2a905e5238b6
allocate_osd_id: Allocating OSD id...
command: Running command: /usr/bin/ceph-authtool --gen-print-key
__init__: stderr
command_with_stdin: Running command with stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2ceb1f9f-5cf8-46fc-bf8c-2a905e5238b6
command_with_stdin:
command_with_stdin: no valid command found; 10 closest matches:
osd setmaxosd <int[0-]>
osd pause
osd crush rule rm <name>
osd crush tree
osd crush rule create-simple <name> <root> <type> {firstn|indep}
osd crush rule create-erasure <name> {<profile>}
osd crush get-tunable straw_calc_version
osd crush show-tunables
osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default
osd crush set-tunable straw_calc_version <int>
Error EINVAL: invalid command

mount_activate: Failed to activate
unmount: Unmounting /var/lib/ceph/tmp/mnt.aCTRx9
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.aCTRx9
'['ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', '-i', '-', 'osd', 'new', u'2ceb1f9f-5cf8-46fc-bf8c-2a905e5238b6']' failed with status code 22


