Re: ceph octopus mysterious OSD crash

I made *some* progress on the cleanup.
I could already do "ceph osd rm 33" from my master. But doing the cleanup on the actual OSD node was problematic.

ceph-volume lvm zap xxx

wasn't working properly... because the device wasn't fully released... because at the regular OS level, it can't even SEE the VGs??
That caught me by surprise.
But doing   cephadm shell   let me see the VGs, remove them, and thus have the zap work.
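
For the archives, the sequence that worked for me was roughly the following (a sketch only; the VG/LV names are placeholders, and /dev/sdb stands in for the data device, same as in the prepare command further down):

   cephadm shell                        # the bare OS can't see the ceph-* VGs, but this container can
   vgs                                  # the ceph-<uuid> VG shows up in here
   vgremove ceph-xx-xx-xx               # release the device (an lvremove of the block LV would also do it)
   ceph-volume lvm zap /dev/sdb         # and now the zap completes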

So now we move on to reconstructing the hybrid OSD.
First off, by default, the cephadm shell did not have permission to create OSDs, so I had to do
[ceph: root@dxxxx /]#  ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring

Unfortunately, since I had run the lvm zap on both the data /dev/sdX AND the db LV, attempting to recreate
the OSD with

ceph-volume lvm prepare --data  /dev/sdb --block.db /dev/ceph-xx-xx-xx/osd-db-xxxx

(the original db LV on SSD, which still technically existed)

FAILED, because
  -->   blkid could not detect a PARTUUID for device: /dev/ceph-xxxx/osd-xxx
  --> Was unable to complete a new OSD, will rollback changes


C'mon.... just MAKE one for me???

:-(

Happily, I could grep for osd-db-specific-id-here in /var/log/ceph/ceph-volume.log and find the exact original lvcreate syntax to remake it.
BUT....
lvm prepare once again complained about not detecting a PARTUUID.
I think there may be a command that sets that up, which is left out of ceph-volume.log   :(
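
For anyone else going down this road, the recreate step looked roughly like this (a sketch; the exact flags and sizes came out of ceph-volume.log, and the names here are placeholders):

   grep osd-db-specific-id-here /var/log/ceph/ceph-volume.log
   # which turned up the original call, along the lines of:
   lvcreate --yes -l <extent-count> -n osd-db-xxxx ceph-xx-xx-xx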


So.. now what can I do?
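
One straw I'm grasping at, in case someone can confirm: if I'm reading the ceph-volume docs right, the --block.db argument can be given in vg/lv form instead of as a /dev/ path, which might sidestep the PARTUUID probe entirely. Something like (placeholder names again):

   ceph-volume lvm prepare --data /dev/sdb --block.db ceph-xx-xx-xx/osd-db-xxxx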



----- Original Message -----
From: "Stefan Kooman" <stefan@xxxxxx>
To: "Philip Brown" <pbrown@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Sent: Friday, March 19, 2021 9:58:56 AM
Subject: Re:  ceph octopus mysterious OSD crash

On 3/19/21 3:53 PM, Philip Brown wrote:
> mkay.
> Sooo... what's the new and nifty proper way to clean this up?
> The outsider's view is,
> "I should just be able to run   'ceph orch osd rm 33'"

Can you spawn a cephadm shell and run: ceph osd rm 33?

And / or: ceph osd crush rm 33, or try to do it with cephadm. Does this 
work: ceph orch osd crush rm 33?

Gr. Stefan

P.s. I'll have to install an octopus release with cephadm to get myself 
up to speed here.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


