> I have a Proxmox cluster with 7 nodes.
>
> Storage for the VM disks and other pool data is on ceph version 15.2.15
> (4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable).
>
> On pve-7 I have 10 OSDs, and as a test I want to remove 2 OSDs from this node.
>
> I have written down the steps and commands I would use to remove these OSDs
> from the pve-7 node and from the Ceph storage:
>
> 1) root@pve-7 ~ # ceph osd tree
>
> 2) ceph osd reweight
> 2.1) ceph osd reweight osd.${ID} 0.98
> 2.2) ceph osd reweight osd.${ID} 0.0
>      (can I set the weight to 0.0 to drain osd.${ID} before removing it?)
>
> 3) Once the OSD is drained of data, can I mark it down?
>    ceph osd down osd.${ID}
>
> 4) Mark the OSD out of the cluster:
>    ceph osd out osd.${ID}
>
> 5) Stop the OSD daemon and unmount it:
>    systemctl stop ceph-osd@${ID}
>    umount /var/lib/ceph/osd/ceph-${ID}
>
> 6) Remove the OSD from the CRUSH map:
>    ceph osd crush remove osd.${ID}
>
> 7) Remove the OSD's auth key:
>    ceph auth del osd.${ID}
>
> 8) And now fully delete the OSD:
>    ceph osd rm osd.${ID}
>
> 9) Last command:
>    ceph osd tree   (the removed OSD should no longer show up)
>
> Can someone tell me whether these steps and commands are correct,
> or do I need to change some of them?

Steps 6, 7 and 8 look a lot like "ceph osd purge", so unless you have a very
old installation, you can replace them with that one command (a short sketch
is appended below). Apart from that it looks ok.

--
May the most significant bit of your life be positive.
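
For reference, a minimal sketch of the shortened tail of the procedure,
assuming ${ID} is the id of the OSD being removed and that it has already
been marked out, drained and stopped as in steps 4 and 5 above (the
safe-to-destroy check is optional):

   # optional sanity check: confirm the OSD can be removed without data loss
   ceph osd safe-to-destroy osd.${ID}

   # replaces steps 6, 7 and 8: removes the OSD from the CRUSH map, deletes
   # its auth key and removes it from the OSD map in one go
   ceph osd purge osd.${ID} --yes-i-really-mean-it

   # verify: the purged OSD should no longer appear
   ceph osd tree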