Re: Device is not available after zap

To update: the OSD had its data on an HDD and its DB on an SSD.
After "ceph orch osd rm 12 --replace --force", and after waiting
for rebalancing to finish and the daemon to stop,
I ran "ceph orch device zap ceph-osd-2 /dev/sdd" to zap the device.
It cleared the PV, VG and LV for the data device, but not for the DB device.
The DB device issue is being discussed in another thread.
Eventually I restarted the active mgr, and then the device showed up as available.
Not sure what was stuck in the mgr.
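
In case it helps, what I did was roughly the following (treat it as a
sketch; older releases may want the active mgr's name passed to
"ceph mgr fail"):

    ceph mgr fail                  # fail over to a standby mgr
    ceph orch device ls --refresh  # force a fresh device inventory scan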

Thanks!
Tony
________________________________________
From: Marc <Marc@xxxxxxxxxxxxxxxxx>
Sent: February 10, 2021 12:21 PM
To: Philip Brown; Matt Wilder
Cc: ceph-users
Subject:  Re: Device is not available after zap

I had something similar a while ago; I can't remember how I solved it, sorry, but it is not an LVM bug. I also posted it here. Too bad this is still not fixed.

> -----Original Message-----
> Cc: ceph-users <ceph-users@xxxxxxx>
> Subject:  Re: Device is not available after zap
>
> I've always run it against the block dev.
>
>
> ----- Original Message -----
> From: "Matt Wilder" <matt.wilder@xxxxxxxxxx>
> To: "Philip Brown" <pbrown@xxxxxxxxxx>
> Cc: "ceph-users" <ceph-users@xxxxxxx>
> Sent: Wednesday, February 10, 2021 12:06:55 PM
> Subject: Re:  Re: Device is not available after zap
>
> Are you running zap on the LVM volume, or the underlying block device?
>
> If you are running it against the LVM volume, it sounds like you need to
> run it against the block device so it wipes the LVM volumes as well.
> (Disclaimer: I don't run Ceph in this configuration)
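>
> Something like this, with host and device names as placeholders:
>
>     ceph orch device zap <hostname> /dev/sdX --force
>     # or, directly on the OSD host:
>     ceph-volume lvm zap --destroy /dev/sdX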
>
> On Wed, Feb 10, 2021 at 10:24 AM Philip Brown <pbrown@xxxxxxxxxx> wrote:
>
> > Sorry, not much to say other than a "me too".
> > I spent a week testing Ceph configurations; it should have only been
> > 2 days, but a huge amount of my time was wasted because I needed to
> > do a full reboot of the hardware.
> >
> > On a related note: sometimes "zap" didn't fully clean things up. I had
> > to manually go in and clean up VGs, or PVs, or sometimes run wipefs -a.
> >
> > So, in theory, this could be a Linux LVM bug, but if I recall, I was
> > doing this with Ceph Octopus and CentOS 7.9.
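> >
> > The manual cleanup looked roughly like this (the VG and device names
> > are placeholders, not the exact ones I used):
> >
> >     vgs                       # spot the leftover ceph-* volume group
> >     vgremove <ceph-vg-name>   # remove the stale VG and its LVs
> >     pvremove /dev/sdX         # clear the PV label
> >     wipefs -a /dev/sdX        # wipe any remaining signatures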
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx