Re: zap an osd and it appears again

> Was the osd spec responsible for creating this osd set to unmanaged? Having
> it re-pickup available disks is the expected behavior right now (see
> https://docs.ceph.com/en/latest/cephadm/services/osd/#declarative-state)
> although we've been considering changing this as it seems like in the
> majority of cases users want to only pick up the disks available at apply
> time and not every matching disk forever.
> But if you have set the service
> to unmanaged and it's still picking up the disks that's a whole different
> issue entirely.

This is not the issue; unmanaged works properly, and no new disks are added. But we would like to avoid editing the disk specification to flip the unmanaged value every time we need to do some operational work.
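
For reference, this is roughly what that toggle looks like for us today. A sketch only: the spec file name is a placeholder, and the sed line assumes the spec carries an explicit "unmanaged: false" line.

    # flip the flag in the spec, then re-apply it
    sed -i 's/unmanaged: false/unmanaged: true/' osd-spec.yml
    ceph orch apply -i osd-spec.yml

    # or, for the built-in all-available-devices service
    ceph orch apply osd --all-available-devices --unmanaged=true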

> We currently remove drives without --zap if we do not want them to be
> automatically re-added. After full removal from the cluster or on addition
> of new drives we set `ceph orch pause` to be able to work on the drives
> without ceph interfering. To add the drives we resume the background
> orchestrator task using `ceph orch resume`.

I guess this could do the trick, but it makes operational work a bit heavier.
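
A rough sketch of that workflow as I understand it (the OSD id, host name and device path below are placeholders):

    ceph orch osd rm 12                          # no --zap, so the device is not picked up again
    ceph orch pause                              # stop background orchestrator activity
    # ... physically swap / test the drive ...
    ceph orch device zap host1 /dev/sdX --force  # makes the device "available" to the spec again
    ceph orch resume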

> I would vote to change the default.
> * Local hands may pull / insert the wrong drive in the wrong place
> * New / replacement drives may have issues; I like to do a sanity check before deploying an OSD
> * Drives used for boot volume mirrors
> * etc

I agree here. I think having a command such as "ceph orch osd populate-new-disks" to explicitly deploy new OSDs, without having to change the state of the orchestrator or the OSD specification every time, would be handy.
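
For one-off additions today, the closest thing I am aware of is the explicit per-device form (host and device below are just examples):

    ceph orch daemon add osd host1:/dev/sdb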

Luis Domingues
Proton AG


------- Original Message -------
On Tuesday, April 26th, 2022 at 19:28, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:


> > Was the osd spec responsible for creating this osd set to unmanaged? Having
> > it re-pickup available disks is the expected behavior right now (see
> > https://docs.ceph.com/en/latest/cephadm/services/osd/#declarative-state)
> > although we've been considering changing this as it seems like in the
> > majority of cases users want to only pick up the disks available at apply
> > time and not every matching disk forever.
>
>
> I would vote to change the default.
>
> * Local hands may pull / insert the wrong drive in the wrong place
> * New / replacement drives may have issues; I like to do a sanity check before deploying an OSD
> * Drives used for boot volume mirrors
> * etc
>
> > But if you have set the service
> > to unmanaged and it's still picking up the disks that's a whole different
> > issue entirely.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


