Infernalis OSD activation on CentOS 7

Hi guys,

 

After managing to get the mons up, I am stuck activating the OSDs with the error below:

 

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /usr/bin/ceph-deploy disk activate osd01:sdb1:sdb2

[ceph_deploy.cli][INFO  ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None

[ceph_deploy.cli][INFO  ]  verbose                       : False

[ceph_deploy.cli][INFO  ]  overwrite_conf                : False

[ceph_deploy.cli][INFO  ]  subcommand                    : activate

[ceph_deploy.cli][INFO  ]  quiet                         : False

[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb9156763b0>

[ceph_deploy.cli][INFO  ]  cluster                       : ceph

[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fb91566d398>

[ceph_deploy.cli][INFO  ]  ceph_conf                     : None

[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.cli][INFO  ]  disk                          : [('osd01', '/dev/sdb1', '/dev/sdb2')]

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks osd01:/dev/sdb1:/dev/sdb2

[osd01][DEBUG ] connection detected need for sudo

[osd01][DEBUG ] connected to host: osd01

[osd01][DEBUG ] detect platform information from remote host

[osd01][DEBUG ] detect machine type

[osd01][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core

[ceph_deploy.osd][DEBUG ] activating host osd01 disk /dev/sdb1

[ceph_deploy.osd][DEBUG ] will use init type: systemd

[osd01][INFO  ] Running command: sudo ceph-disk -v activate --mark-init systemd --mount /dev/sdb1

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid

[osd01][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk -i 1 /dev/sdb

[osd01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs

[osd01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.Ng38c4 with options noatime,inode64

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.Ng38c4

[osd01][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.Ng38c4

[osd01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 0c36d242-92a9-4331-b48d-ce07b628750a

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[osd01][WARNIN] ERROR:ceph-disk:Failed to activate

[osd01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.Ng38c4

[osd01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Ng38c4

[osd01][WARNIN] Traceback (most recent call last):

[osd01][WARNIN]   File "/sbin/ceph-disk", line 3576, in <module>

[osd01][WARNIN]     main(sys.argv[1:])

[osd01][WARNIN]   File "/sbin/ceph-disk", line 3530, in main

[osd01][WARNIN]     args.func(args)

[osd01][WARNIN]   File "/sbin/ceph-disk", line 2424, in main_activate

[osd01][WARNIN]     dmcrypt_key_dir=args.dmcrypt_key_dir,

[osd01][WARNIN]   File "/sbin/ceph-disk", line 2197, in mount_activate

[osd01][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)

[osd01][WARNIN]   File "/sbin/ceph-disk", line 2331, in activate

[osd01][WARNIN]     raise Error('No cluster conf found in ' + SYSCONFDIR + ' with fsid %s' % ceph_fsid)

[osd01][WARNIN] __main__.Error: Error: No cluster conf found in /etc/ceph with fsid 0c36d242-92a9-4331-b48d-ce07b628750a

[osd01][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
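
If I read the traceback right, ceph-disk mounts the data partition, reads the cluster uuid from the ceph_fsid file it finds there (that is where the "Cluster uuid is 0c36d242-..." line above comes from), and then looks for a conf under /etc/ceph whose fsid matches it. This is only my reconstruction from the log, but the check seems to be roughly the following (the temp mount point is the one from the log and gets unmounted again on failure, and the loop over *.conf is my guess at how it picks the cluster name):

part_fsid=$(cat /var/lib/ceph/tmp/mnt.Ng38c4/ceph_fsid)   # uuid written onto the partition at prepare time
for conf in /etc/ceph/*.conf; do
    cluster=$(basename "$conf" .conf)
    conf_fsid=$(ceph-osd --cluster="$cluster" --show-config-value=fsid)
    [ "$conf_fsid" = "$part_fsid" ] && echo "matching cluster conf: $conf"
done

Since nothing under /etc/ceph carries the 0c36d242-... fsid, the activate step bails out with the error above.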

 

Why do I get no cluster conf? The conf file is right there:

 

[ceph@osd01 ~]$ ll /etc/ceph/

total 12

-rw------- 1 ceph ceph  63 Dec  2 10:30 ceph.client.admin.keyring

-rw-r--r-- 1 ceph ceph 270 Dec  2 10:31 ceph.conf

-rwxr-xr-x 1 ceph ceph  92 Nov 10 07:06 rbdmap

-rw------- 1 ceph ceph   0 Dec  2 10:30 tmp0jJPo4

 

[ceph@osd01 ~]$ cat /etc/ceph/ceph.conf

[global]

fsid = 0e906cd0-81f1-412c-a3aa-3866192a2de7

mon_initial_members = cmon01, cmon02, cmon03

mon_host = 10.8.250.249,10.8.250.248,10.8.250.247

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

filestore_xattr_use_omap = true

 

Why is it looking for a different fsid than the one in ceph.conf?
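
For what it is worth, this is how I would compare the two uuids directly (assuming the data partition mounts cleanly; /mnt is just an arbitrary mount point, nothing ceph-specific):

sudo mount /dev/sdb1 /mnt
cat /mnt/ceph_fsid              # uuid the OSD partition was prepared with
grep fsid /etc/ceph/ceph.conf   # uuid the current cluster conf carries
sudo umount /mnt

Presumably the first would show the 0c36d242-... uuid from the log and the second the 0e906cd0-... one from ceph.conf above, i.e. the partition looks like it was prepared with a different fsid than the conf I have now, though I do not understand how that happened.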

 

Thanks,
Dan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
