Turns out there is no way to undo the deletion:
https://github.com/kubernetes/kubernetes/issues/69980

Time to rotate the pool under the folder and just let it do its thing...

On Fri, Sep 25, 2020 at 1:51 PM Brian Topping <brian.topping@xxxxxxxxx> wrote:

> Haha, I figured out you were on Rook.
>
> I think you need to add an annotation or label to the CRD. Just create an
> empty one and do a kubectl get cephcluster -oyaml to see what it generates,
> then figure out what the appropriate analog for the restored CRD is. Once
> the operator sees the correct info, it will stop trying.
>
> Sent from my iPhone
>
> > On Sep 25, 2020, at 09:02, Peter Sarossy <peter.sarossy@xxxxxxxxx> wrote:
> >
> > Thanks for the details, folks.
> >
> > Apologies, apparently yesterday was definitely not a day for me to be
> > operating anything, as I meant to send this to the rook users list
> > instead of the ceph users list :(
> >
> > I will circle back with an answer for posterity once I figure it out.
> >
> >> On Fri, Sep 25, 2020 at 3:13 AM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> >>
> >> Hi Peter,
> >>
> >> I'm not a rook expert, but are you asking how to remove the rook
> >> action to delete a pool? Or is the pool already deleted from ceph
> >> itself?
> >>
> >> We "bare" ceph operators have multiple locks to avoid fat fingers, like:
> >>   ceph osd pool set cephfs_data nodelete 1
> >>   ceph config set mon mon_allow_pool_delete false  # the default
> >>
> >> -- Dan
> >>
> >>> On Fri, Sep 25, 2020 at 4:49 AM Peter Sarossy <peter.sarossy@xxxxxxxxx> wrote:
> >>>
> >>> Hit send too early...
> >>>
> >>> So I did find in the code that it's looking for the deletion timestamp,
> >>> but deleting this field in the CRD does not stop the deletion request
> >>> either. The deletionTimestamp reappears after committing the change.
> >>> https://github.com/rook/rook/blob/23108cc94afdebc8f4ab144130a270b1e4ffd94e/pkg/operator/ceph/pool/controller.go#L193
> >>>
> >>> On Thu, Sep 24, 2020 at 10:40 PM Peter Sarossy <peter.sarossy@xxxxxxxxx> wrote:
> >>>
> >>>> hey folks,
> >>>>
> >>>> I have managed to fat-finger a config apply command and accidentally
> >>>> deleted the CRD for one of my pools. The operator went ahead and tried
> >>>> to purge it, but fortunately, since it's used by CephFS, it was unable to.
> >>>>
> >>>> Redeploying the exact same CRD does not make the operator stop trying
> >>>> to delete it, though.
> >>>>
> >>>> Any hints on how to make the operator forget about the deletion request
> >>>> and leave it be?
> >>>>
> >>>> --
> >>>> Cheers,
> >>>> Peter Sarossy
> >>>> Technical Program Manager
> >>>> Data Center Data Security - Google LLC.

--
Cheers,
Peter Sarossy
Technical Program Manager
Data Center Data Security - Google LLC.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
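
[Editor's note, for posterity: the behavior Peter saw is standard Kubernetes semantics rather than anything Rook-specific. Once metadata.deletionTimestamp is set on an object, the API server never clears it (the linked kubernetes#69980 issue confirms this); the object is removed as soon as its finalizers list empties. The commands below are a minimal sketch only: the namespace `rook-ceph` and pool name `my-pool` are assumptions, not taken from the thread, and this inspects the state rather than undoing it.]

```shell
# Sketch only: "rook-ceph" and "my-pool" are assumed names, adjust to your cluster.
# Show the pending deletion timestamp and the finalizer(s) the operator is
# waiting on before the object can actually go away:
kubectl -n rook-ceph get cephblockpool my-pool \
  -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'

# Guard against future fat fingers on the Ceph side, as Dan suggested upthread:
ceph osd pool set cephfs_data nodelete 1
ceph config set mon mon_allow_pool_delete false   # the default
```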