On 8/28/22 08:57, farhad kh wrote:
I removed an OSD from the CRUSH map, but it still appears in 'ceph osd tree':

[root@ceph2-node-01 ~]# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME                        STATUS  REWEIGHT  PRI-AFF
 -1         20.03859  root default
-20         20.03859      datacenter dc-1
-21         20.03859          room server-room-1
-22         10.00000              rack rack-1
 -3          1.00000                  host ceph2-node-01
-23         10.00000              rack rack-2
 -5          1.00000                  host ceph2-node-02
-24         10.00000              rack rack-3
 -7          1.00000                  host ceph2-node-03
  1                0  osd.1                            down          0  1.00000
  9                0  osd.9                            down    1.00000  1.00000
 12                0  osd.12                           down    1.00000  1.00000

but in the CRUSH tree it is not:

[root@ceph2-node-01 ~]# ceph osd crush tree
ID   CLASS  WEIGHT    TYPE NAME
 -1         20.03859  root default
-20         20.03859      datacenter dc-1
-21         20.03859          room server-room-1
-22         10.00000              rack rack-1
 -3          1.00000                  host ceph2-node-01
-23         10.00000              rack rack-2
 -5          1.00000                  host ceph2-node-02
-24         10.00000              rack rack-3
 -7          1.00000                  host ceph2-node-03

How can I resolve this?
Have you followed this procedure: https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/?
If the OSD has already been removed from the CRUSH map, it's possible that the 'ceph osd purge' command won't work anymore. But you can follow these steps:
- stop the OSD daemon if it is still running
- ceph auth del osd.$osd-id
- ceph osd rm $osd-id

Gr. Stefan
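For the three stale OSDs visible in the tree output above (osd.1, osd.9, osd.12), those steps could be scripted as a dry run first. This is only a sketch: the helper below echoes the commands instead of executing them, so you can review the output before piping it to a shell. The id list is taken from the original 'ceph osd tree' listing.

```shell
#!/bin/sh
# Dry-run helper: print the cleanup commands for each stale OSD id
# so they can be reviewed before actually being run against the cluster.
osd_cleanup_cmds() {
    for id in "$@"; do
        # Stop the daemon first, in case it is still running (systemd-managed OSDs).
        echo "systemctl stop ceph-osd@${id}"
        # Remove the OSD's cephx key from the auth database.
        echo "ceph auth del osd.${id}"
        # Remove the OSD from the cluster's OSD map.
        echo "ceph osd rm ${id}"
    done
}

# OSD ids taken from the "ceph osd tree" output above.
osd_cleanup_cmds 1 9 12
```

Once you are satisfied with the printed commands, they can be run one by one (or the echoes replaced with the real commands).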