Re: Orphaned entries in Crush map

What is the output of `ceph osd stat`? My guess is that osd.19 and osd.20 are still considered part of the cluster, and that you need to go through the full procedure for removing OSDs from your cluster, in particular `ceph osd rm 19`.
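
For completeness, the removal sequence for a stale OSD usually looks roughly like this (same for 20, just adjust the ID; the exact steps can vary a bit between releases):

  ceph osd out 19                  # mark the OSD out (no-op if it is already gone)
  ceph osd crush remove osd.19     # remove the device/item from the CRUSH map
  ceph auth del osd.19             # delete its cephx key
  ceph osd rm 19                   # remove it from the OSD map

If `ceph osd rm` alone does not clear the `device19` entry, the `ceph osd crush remove` step is usually the one that drops it from the map.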

On Fri, Feb 16, 2018 at 2:31 PM Karsten Becker <karsten.becker@xxxxxxxxxxx> wrote:
Hi.

during the reorganization of my cluster I removed some OSDs. Obviously
something went wrong for two of them, osd.19 and osd.20.

If I fetch my current Crush map and decompile it, I see two
orphaned/stale entries for the former OSDs:

> device 16 osd.16 class hdd
> device 17 osd.17 class hdd
> device 18 osd.18 class hdd
> device 19 device19
> device 20 device20
> device 21 osd.21 class hdd
> device 22 osd.22 class hdd
> device 23 osd.23 class hdd

If I delete them from the Crush map (file), recompile it and inject it
back into the cluster, they appear again... if I fetch the current map
once more and decompile it, the entries are back.
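
For reference, the workflow I use is roughly the following (the file names are just local placeholders):

  ceph osd getcrushmap -o crushmap.bin       # dump the current compiled map
  crushtool -d crushmap.bin -o crushmap.txt  # decompile it to editable text
  # ... delete the device19/device20 lines from crushmap.txt ...
  crushtool -c crushmap.txt -o crushmap.new  # recompile the edited map
  ceph osd setcrushmap -i crushmap.new       # inject it back into the cluster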

So how do I get rid of these entries?

Best from Berlin/Germany
Karsten

Ecologic Institut gemeinnuetzige GmbH
Pfalzburger Str. 43/44, D-10717 Berlin
Geschaeftsfuehrerin / Director: Dr. Camilla Bausch
Sitz der Gesellschaft / Registered Office: Berlin (Germany)
Registergericht / Court of Registration: Amtsgericht Berlin (Charlottenburg), HRB 57947
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
