I usually do the crush rm step second to last. I don't know whether your
modifying the OSD after removing it from the CRUSH map is putting it back in.

1. Stop the OSD process
2. ceph osd rm
3. ceph osd crush rm osd.<id>
4. ceph auth del osd.<id>

Can you try the crush rm command again for kicks and giggles?

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1


On Fri, Jul 31, 2015 at 1:15 AM, Mallikarjun Biradar
<mallikarjuna.biradar@xxxxxxxxx> wrote:
> Hi,
>
> I had 27 OSDs in my cluster. I removed two of them: osd.20 from
> host-3 and osd.22 from host-6.
>
> user@host-1:~$ sudo ceph osd tree
> ID WEIGHT    TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 184.67990 root default
> -7  82.07996     chassis chassis2
> -4  41.03998         host host-3
>  8   6.84000             osd.8         up  1.00000          1.00000
>  9   6.84000             osd.9         up  1.00000          1.00000
> 10   6.84000             osd.10        up  1.00000          1.00000
> 11   6.84000             osd.11        up  1.00000          1.00000
> 20   6.84000             osd.20        up  1.00000          1.00000
> 21   6.84000             osd.21        up  1.00000          1.00000
> -5  41.03998         host host-6
> 12   6.84000             osd.12        up  1.00000          1.00000
> 13   6.84000             osd.13        up  1.00000          1.00000
> 14   6.84000             osd.14        up  1.00000          1.00000
> 15   6.84000             osd.15        up  1.00000          1.00000
> 22   6.84000             osd.22        up  1.00000          1.00000
> 23   6.84000             osd.23        up  1.00000          1.00000
> -6 102.59995     chassis chassis1
> -2  47.87997         host host-1
>  0   6.84000             osd.0         up  1.00000          1.00000
>  1   6.84000             osd.1         up  1.00000          1.00000
>  2   6.84000             osd.2         up  1.00000          1.00000
>  3   6.84000             osd.3         up  1.00000          1.00000
> 16   6.84000             osd.16        up  1.00000          1.00000
> 17   6.84000             osd.17        up  1.00000          1.00000
> 24   6.84000             osd.24        up  1.00000          1.00000
> -3  54.71997         host host-2
>  4   6.84000             osd.4         up  1.00000          1.00000
>  5   6.84000             osd.5         up  1.00000          1.00000
>  6   6.84000             osd.6         up  1.00000          1.00000
>  7   6.84000             osd.7         up  1.00000          1.00000
> 18   6.84000             osd.18        up  1.00000          1.00000
> 19   6.84000             osd.19        up  1.00000          1.00000
> 25   6.84000             osd.25        up  1.00000          1.00000
> 26   6.84000             osd.26        up  1.00000          1.00000
> user@host-1:~$
>
> Steps used to remove OSD:
> user@host-1:~$ ceph auth del osd.20; ceph osd crush rm osd.20; ceph
> osd down osd.20; ceph osd rm osd.20
> updated
> removed item id 20 name 'osd.20' from crush map
> marked down osd.22.
> removed osd.22
>
> Removed both OSDs, osd.20 & osd.22.
>
> But even after removing them, ceph osd tree is still listing the
> deleted OSDs, and ceph -s reports the total number of OSDs as 27.
>
> user@host-1:~$ sudo ceph osd tree
> ID WEIGHT    TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 184.67990 root default
> -7  82.07996     chassis chassis2
> -4  41.03998         host host-3
>  8   6.84000             osd.8         up  1.00000          1.00000
>  9   6.84000             osd.9         up  1.00000          1.00000
> 10   6.84000             osd.10        up  1.00000          1.00000
> 11   6.84000             osd.11        up  1.00000          1.00000
> 21   6.84000             osd.21        up  1.00000          1.00000
> -5  41.03998         host host-6
> 12   6.84000             osd.12        up  1.00000          1.00000
> 13   6.84000             osd.13        up  1.00000          1.00000
> 14   6.84000             osd.14        up  1.00000          1.00000
> 15   6.84000             osd.15        up  1.00000          1.00000
> 23   6.84000             osd.23        up  1.00000          1.00000
> -6 102.59995     chassis chassis1
> -2  47.87997         host host-1
>  0   6.84000             osd.0         up  1.00000          1.00000
>  1   6.84000             osd.1         up  1.00000          1.00000
>  2   6.84000             osd.2         up  1.00000          1.00000
>  3   6.84000             osd.3         up  1.00000          1.00000
> 16   6.84000             osd.16        up  1.00000          1.00000
> 17   6.84000             osd.17        up  1.00000          1.00000
> 24   6.84000             osd.24        up  1.00000          1.00000
> -3  54.71997         host host-2
>  4   6.84000             osd.4         up  1.00000          1.00000
>  5   6.84000             osd.5         up  1.00000          1.00000
>  6   6.84000             osd.6         up  1.00000          1.00000
>  7   6.84000             osd.7         up  1.00000          1.00000
> 18   6.84000             osd.18        up  1.00000          1.00000
> 19   6.84000             osd.19        up  1.00000          1.00000
> 25   6.84000             osd.25        up  1.00000          1.00000
> 26   6.84000             osd.26        up  1.00000          1.00000
> 20         0 osd.20                    up        0          1.00000
> 22         0 osd.22                    up        0          1.00000
> user@host-1:~$
>
> Please let me know how to remove these OSDs in this case.
>
> -Thanks & Regards,
> Mallikarjun Biradar
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
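For reference, the removal order suggested in the reply above can be sketched as a short shell sequence. This is a sketch, not a verified procedure: the systemctl service name is an assumption for systemd-managed OSDs (older releases use the sysvinit/upstart scripts instead), and the commands must run from a node with admin credentials:

```shell
# Sketch of the suggested order, with crush rm second to last.
# OSD_ID=20 is taken from this thread; substitute your own OSD id.
OSD_ID=20

sudo systemctl stop ceph-osd@${OSD_ID}   # 1. stop the OSD process (assumed systemd unit name)
ceph osd rm osd.${OSD_ID}                # 2. remove it from the OSD map (requires the OSD to be down)
ceph osd crush rm osd.${OSD_ID}          # 3. remove it from the CRUSH map
ceph auth del osd.${OSD_ID}              # 4. delete its cephx key
```

If the OSD reappears in the tree afterwards (as in the output above), rerunning the `ceph osd crush rm` and `ceph osd rm` steps is the retry the reply proposes.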