Re: OSD activate Error

I'd guess you previously removed an osd.0 but forgot to run 'ceph auth del osd.0', so the monitors still hold the old key and the new OSD's keyring doesn't match it (hence the "entity osd.0 exists but key does not match" error below).

'ceph auth list' might show some other stray keys as well.
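A minimal cleanup would be something like this (a sketch, assuming osd.0 really is the leftover from the earlier removal; double-check the 'ceph auth list' output first):

[root@mon01 ceph]# ceph auth list        # confirm a stale osd.0 entry is present

[root@mon01 ceph]# ceph auth del osd.0   # delete the stale key

[root@mon01 ceph]# ceph-deploy osd activate osd01:/dev/sdc1:/dev/sde1   # retry; 'auth add osd.0' should now succeed

If you'd rather retire the old id entirely before re-preparing, 'ceph osd crush remove osd.0' followed by 'ceph osd rm 0' removes it from the CRUSH map and the osdmap.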

Bob

On Mon, Apr 4, 2016 at 9:52 PM, <zainal@xxxxxxxxxx> wrote:

Hi,

 

I keep getting this error while trying to activate:

 

[root@mon01 ceph]# ceph-deploy osd prepare osd01:sdc:/dev/sde1

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy osd prepare osd01:sdc:/dev/sde1

[ceph_deploy.cli][INFO  ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None

[ceph_deploy.cli][INFO  ]  disk                          : [('osd01', '/dev/sdc', '/dev/sde1')]

[ceph_deploy.cli][INFO  ]  dmcrypt                       : False

[ceph_deploy.cli][INFO  ]  verbose                       : False

[ceph_deploy.cli][INFO  ]  overwrite_conf                : False

[ceph_deploy.cli][INFO  ]  subcommand                    : prepare

[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys

[ceph_deploy.cli][INFO  ]  quiet                         : False

[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xb72cb0>

[ceph_deploy.cli][INFO  ]  cluster                       : ceph

[ceph_deploy.cli][INFO  ]  fs_type                       : xfs

[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xb67320>

[ceph_deploy.cli][INFO  ]  ceph_conf                     : None

[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.cli][INFO  ]  zap_disk                      : False

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks osd01:/dev/sdc:/dev/sde1

[osd01][DEBUG ] connected to host: osd01

[osd01][DEBUG ] detect platform information from remote host

[osd01][DEBUG ] detect machine type

[osd01][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core

[ceph_deploy.osd][DEBUG ] Deploying osd to osd01

[osd01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.osd][DEBUG ] Preparing host osd01 disk /dev/sdc journal /dev/sde1 activate False

[osd01][INFO  ] Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdc /dev/sde1

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sde1 uuid path is /sys/dev/block/8:65/dm/uuid

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sde1 uuid path is /sys/dev/block/8:65/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:Journal /dev/sde1 is a partition

[osd01][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/blkid -o udev -p /dev/sde1

[osd01][WARNIN] WARNING:ceph-disk:Journal /dev/sde1 was not prepared with ceph-disk. Symlinking directly.

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdc

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:de03fdc6-db34-46bd-ae98-ed3c7093b0b4 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdc

[osd01][DEBUG ] The operation has completed successfully.

[osd01][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sdc

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle --timeout=600

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partprobe /dev/sdc

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle --timeout=600

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdc1

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdc1

[osd01][DEBUG ] meta-data="" isize=2048   agcount=4, agsize=183141597 blks

[osd01][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1

[osd01][DEBUG ]          =                       crc=0        finobt=0

[osd01][DEBUG ] data     =                       bsize=4096   blocks=732566385, imaxpct=5

[osd01][DEBUG ]          =                       sunit=0      swidth=0 blks

[osd01][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

[osd01][DEBUG ] log      =internal log           bsize=4096   blocks=357698, version=2

[osd01][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1

[osd01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.so2U0K with options noatime,inode64

[osd01][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.so2U0K

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.so2U0K

[osd01][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.so2U0K

[osd01][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.so2U0K/journal -> /dev/sde1

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.so2U0K/ceph_fsid.4300.tmp

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.so2U0K/ceph_fsid.4300.tmp

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.so2U0K/fsid.4300.tmp

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.so2U0K/fsid.4300.tmp

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.so2U0K/magic.4300.tmp

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.so2U0K/magic.4300.tmp

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.so2U0K

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.so2U0K

[osd01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.so2U0K

[osd01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.so2U0K

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc

[osd01][DEBUG ] The operation has completed successfully.

[osd01][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdc

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle --timeout=600

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partprobe /dev/sdc

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle --timeout=600

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdc1

[osd01][INFO  ] checking OSD status...

[osd01][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host osd01 is now ready for osd use.

[root@mon01 ceph]# ceph-deploy osd activate osd01:/dev/sdc1:/dev/sde1

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy osd activate osd01:/dev/sdc1:/dev/sde1

[ceph_deploy.cli][INFO  ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None

[ceph_deploy.cli][INFO  ]  verbose                       : False

[ceph_deploy.cli][INFO  ]  overwrite_conf                : False

[ceph_deploy.cli][INFO  ]  subcommand                    : activate

[ceph_deploy.cli][INFO  ]  quiet                         : False

[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f15af860cb0>

[ceph_deploy.cli][INFO  ]  cluster                       : ceph

[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f15af855320>

[ceph_deploy.cli][INFO  ]  ceph_conf                     : None

[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.cli][INFO  ]  disk                          : [('osd01', '/dev/sdc1', '/dev/sde1')]

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks osd01:/dev/sdc1:/dev/sde1

[osd01][DEBUG ] connected to host: osd01

[osd01][DEBUG ] detect platform information from remote host

[osd01][DEBUG ] detect machine type

[osd01][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core

[ceph_deploy.osd][DEBUG ] activating host osd01 disk /dev/sdc1

[ceph_deploy.osd][DEBUG ] will use init type: systemd

[osd01][INFO  ] Running command: ceph-disk -v activate --mark-init systemd --mount /dev/sdc1

[osd01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/blkid -o udev -p /dev/sdc1

[osd01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdc1

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs

[osd01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.IkLtKp with options noatime,inode64

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.IkLtKp

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.IkLtKp

[osd01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 3e77a4d7-0813-47f8-8a4b-33392effdf99

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[osd01][WARNIN] DEBUG:ceph-disk:Cluster name is ceph

[osd01][WARNIN] DEBUG:ceph-disk:OSD uuid is de03fdc6-db34-46bd-ae98-ed3c7093b0b4

[osd01][WARNIN] DEBUG:ceph-disk:OSD id is 0

[osd01][WARNIN] DEBUG:ceph-disk:Marking with init system systemd

[osd01][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...

[osd01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /var/lib/ceph/tmp/mnt.IkLtKp/keyring osd allow * mon allow profile osd

[osd01][WARNIN] Error EINVAL: entity osd.0 exists but key does not match

[osd01][WARNIN] ERROR:ceph-disk:Failed to activate

[osd01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.IkLtKp

[osd01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.IkLtKp

[osd01][WARNIN] Traceback (most recent call last):

[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 3589, in <module>

[osd01][WARNIN]     main(sys.argv[1:])

[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 3543, in main

[osd01][WARNIN]     args.func(args)

[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 2438, in main_activate

[osd01][WARNIN]     dmcrypt_key_dir=args.dmcrypt_key_dir,

[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 2211, in mount_activate

[osd01][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)

[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 2406, in activate

[osd01][WARNIN]     keyring=keyring,

[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 1988, in auth_key

[osd01][WARNIN]     'mon', 'allow profile osd',

[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 350, in command_check_call

[osd01][WARNIN]     return subprocess.check_call(arguments)

[osd01][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call

[osd01][WARNIN]     raise CalledProcessError(retcode, cmd)

[osd01][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', 'auth', 'add', 'osd.0', '-i', '/var/lib/ceph/tmp/mnt.IkLtKp/keyring', 'osd', 'allow *', 'mon', 'allow profile osd']' returned non-zero exit status 22

[osd01][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init systemd --mount /dev/sdc1

 

[root@mon01 ceph]# ceph -s

    cluster 3e77a4d7-0813-47f8-8a4b-33392effdf99

     health HEALTH_WARN

            64 pgs stuck inactive

            64 pgs stuck unclean

     monmap e1: 2 mons at {mon01=42.0.30.38:6789/0,mon02=42.0.30.39:6789/0}

            election epoch 4, quorum 0,1 mon01,mon02

     osdmap e5: 1 osds: 0 up, 0 in

            flags sortbitwise

      pgmap v6: 64 pgs, 1 pools, 0 bytes data, 0 objects

            0 kB used, 0 kB / 0 kB avail

                  64 creating

[root@mon01 ceph]# ceph osd tree

ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1      0 root default

 0      0 osd.0           down        0          1.00000

[root@mon01 ceph]#

 

Regards,

 

Mohd Zainal Abidin Rabani

Technical Support

 


