Re: Recreating the OSD's with same ID does not seem to work

The cluster still creates and stores cephx keys even when authentication is
disabled, in case you enable it later. That's exactly what the error is
telling you, and that's why it's not working.
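
A minimal sketch of the immediate fix, assuming osd.2 and the /osd-data path
from the log quoted below (the exact OSD id on your hosts may differ):

    ceph auth get osd.2            # the stale key is still registered with the monitors
    ceph auth del osd.2            # remove the stale entry
    ceph-disk activate /osd-data   # re-run activation; the "auth add osd.2" step should now succeed

Once the stale entry is gone, ceph-disk can register the newly generated key
under the same id and activation completes.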

On Fri, Nov 14, 2014 at 4:45 PM, JIten Shah <jshah2005@xxxxxx> wrote:
> But I am not using “cephx” for authentication. I have already disabled that.
>
> —Jiten
>
> On Nov 14, 2014, at 4:44 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>
>> You didn't remove them from the auth monitor's keyring. If you're
>> removing OSDs, you need to follow the steps in the documentation (a
>> sketch of those steps follows the quoted log below).
>> -Greg
>>
>> On Fri, Nov 14, 2014 at 4:42 PM, JIten Shah <jshah2005@xxxxxx> wrote:
>>> Hi Guys,
>>>
>>> I had to rekick some of the hosts where OSDs were running, and after the
>>> rekick, when I try to run Puppet and install the OSDs again, it gives me a
>>> key mismatch error (as below). After the hosts were shut down for the rekick,
>>> I removed the OSDs from the osd tree and the crush map too. Why is it still
>>> tied to the old key?
>>>
>>>
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> + test -b /osd-data
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> + mkdir -p /osd-data
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> + ceph-disk prepare /osd-data
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> + test -b /osd-data
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> + ceph-disk activate /osd-data
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> got monmap epoch 2
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> 2014-11-15 00:31:25.951783 7ffff7fe67a0 -1 journal FileJournal::_open:
>>> disabling aio for non-block journal.  Use journal_force_aio to force use of
>>> aio anyway
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> 2014-11-15 00:31:26.023037 7ffff7fe67a0 -1 journal FileJournal::_open:
>>> disabling aio for non-block journal.  Use journal_force_aio to force use of
>>> aio anyway
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> 2014-11-15 00:31:26.023809 7ffff7fe67a0 -1 filestore(/osd-data) could not
>>> find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> 2014-11-15 00:31:26.032044 7ffff7fe67a0 -1 created object store /osd-data
>>> journal /osd-data/journal for osd.2 fsid
>>> 2e738cda-1930-48cd-a4b1-74bc737c5d56
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> 2014-11-15 00:31:26.032097 7ffff7fe67a0 -1 auth: error reading file:
>>> /osd-data/keyring: can't open /osd-data/keyring: (2) No such file or
>>> directory
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> 2014-11-15 00:31:26.032189 7ffff7fe67a0 -1 created new key in keyring
>>> /osd-data/keyring
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> Error EINVAL: entity osd.2 exists but key does not match
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> Traceback (most recent call last):
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/sbin/ceph-disk", line 2591, in <module>
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> main()
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/sbin/ceph-disk", line 2569, in main
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> args.func(args)
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/sbin/ceph-disk", line 1929, in main_activate
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> init=args.mark_init,
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/sbin/ceph-disk", line 1761, in activate_dir
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> (osd_id, cluster) = activate(path, activate_key_template, init)
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/sbin/ceph-disk", line 1897, in activate
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> keyring=keyring,
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/sbin/ceph-disk", line 1520, in auth_key
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> 'mon', 'allow profile osd',
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/sbin/ceph-disk", line 304, in command_check_call
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> return subprocess.check_call(arguments)
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> raise CalledProcessError(retcode, cmd)
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> subprocess.CalledProcessError: Command '['/usr/bin/ceph', '--cluster',
>>> 'ceph', '--name', 'client.bootstrap-osd', '--keyring',
>>> '/var/lib/ceph/bootstrap-osd/ceph.keyring', 'auth', 'add', 'osd.2', '-i',
>>> '/osd-data/keyring', 'osd', 'allow *', 'mon', 'allow profile osd']' returned
>>> non-zero exit status 22
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> + true
>>> Notice:
>>> /Stage[main]/Main/Node[infrastructure_node]/Ceph::Osd[/osd-data]/Exec[ceph-osd-mkfs-/osd-data]/returns:
>>> executed successfully
>>>
>>>
>
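
For reference, a sketch of the documented removal steps Greg mentions, using
osd.2 from the log above; these are the standard ceph CLI commands, but the
daemon stop command depends on your init system, so adjust as needed:

    ceph osd out 2                 # mark the OSD out so data migrates off it
    service ceph stop osd.2        # stop the daemon (init command varies by distro)
    ceph osd crush remove osd.2    # remove it from the CRUSH map
    ceph auth del osd.2            # remove its key from the monitors' auth database
    ceph osd rm 2                  # remove it from the OSD map

The "ceph auth del" step is the one that was missed here: without it the old
key stays registered, and re-creating an OSD with the same id fails with
"entity osd.2 exists but key does not match".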




