Re: zap an osd and it appears again

Hi,

We currently remove drives without --zap if we do not want them to be
automatically re-added. After the OSDs are fully removed from the cluster,
or when adding new drives, we run `ceph orch pause` so that we can work on
the drives without Ceph interfering. Once the drives are ready to be added,
we resume the background orchestrator tasks with `ceph orch resume`.
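A rough sketch of that workflow (the OSD id `3` is a placeholder, and the
exact flags assume a cephadm-managed cluster):

```
# Remove the OSD without --zap so the orchestrator will not re-create it
ceph orch osd rm 3

# Pause background orchestrator activity before touching the hardware
ceph orch pause

# ... physically service or replace the drives ...

# Resume the orchestrator so it applies the OSD specs again
ceph orch resume
```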

Thanks,
David


On Tue, Apr 26, 2022, 10:28 Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

>
>
> > Was the osd spec responsible for creating this osd set to unmanaged?
> Having
> > it re-pickup available disks is the expected behavior right now (see
> > https://docs.ceph.com/en/latest/cephadm/services/osd/#declarative-state)
> > although we've been considering changing this as it seems like in the
> > majority of cases users want to only pick up the disks available at apply
> > time and not every matching disk forever.
>
> I would vote to change the default.
>
> * Local hands may pull / insert the wrong drive in the wrong place
> * New / replacement drives may have issues; I like to do a sanity check
> before deploying an OSD
> * Drives used for boot volume mirrors
> * etc
>
>
> > But if you have set the service
> > to unmanaged and it's still picking up the disks, that's a different
> > issue entirely.
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
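For reference, the unmanaged state discussed above can be set at apply
time (a sketch based on the cephadm docs linked upthread; the
`--all-available-devices` spec is just an example target):

```
# Keep the spec but stop cephadm from automatically creating OSDs
# on newly available devices matching it
ceph orch apply osd --all-available-devices --unmanaged=true
```

The same effect can be had by setting `unmanaged: true` in the service's
spec file and re-applying it.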


