Hi,
I'm not sure if the force flag will help here, but you could try it (you
should probably cancel the current operation first with 'ceph orch osd
rm stop {ID}' and then retry with force). When I had a similar situation
last time, I just went ahead and purged the OSDs myself to get the
orchestrator to clear its removal queue (ceph osd purge {ID}). Maybe I
was a bit impatient, but I didn't know how long I would have had to
wait, so I intervened. ;-)
You can also stop the rm operation and then purge those OSDs with a
for loop (since the OSD IDs are consecutive, it's an easy loop ;-) ).
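Something like the following could do it, a rough sketch assuming the OSD IDs 528-539 from the listing below (the commands are echoed first so you can review them before dropping the "echo"; note that 'ceph osd purge' needs --yes-i-really-mean-it):

```shell
# Stop the stuck rm operations and purge the OSDs 528-539.
# Echoes the commands for review; remove "echo" to actually run them.
for id in $(seq 528 539); do
  echo ceph orch osd rm stop "$id"
  echo ceph osd purge "$id" --yes-i-really-mean-it
done
```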
Regards,
Eugen
Quoting Torkil Svensgaard <torkil@xxxxxxxx>:
Hi
18.2.4
We had some hard drives going AWOL due to a failing SAS expander, so
I initiated "ceph orch host drain host". After a couple of days I'm
now looking at this:
"
OSD  HOST   STATE                    PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT
528  gimpy  done, waiting for purge  0    False    False  False
529  gimpy  done, waiting for purge  0    False    False  False
530  gimpy  done, waiting for purge  0    False    False  False
531  gimpy  done, waiting for purge  0    False    False  False
532  gimpy  done, waiting for purge  0    False    False  False
533  gimpy  done, waiting for purge  0    False    False  False
534  gimpy  done, waiting for purge  0    False    False  False
535  gimpy  done, waiting for purge  0    False    False  False
536  gimpy  done, waiting for purge  0    False    False  False
537  gimpy  done, waiting for purge  0    False    False  False
538  gimpy  done, waiting for purge  0    False    False  False
539  gimpy  done, waiting for purge  0    False    False  False
"
It removed the drives that were still working just fine, but the
missing drives seem stuck like this. How do I get these to finish?
Force rm?
Mvh.
Torkil
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx