OSD hierarchy and CRUSH

Hi,
yesterday I expanded our 3-node Ceph cluster with a fourth node
(13 additional OSDs; all OSDs have the same size, 4 TB).

I used the same command as before to add the OSDs and set their weight:
ceph osd crush set 44 0.2 pool=default rack=unknownrack host=ceph-04

But ceph osd tree shows that the new OSDs are not below unknownrack, and
the weighting seems to differ from the other hosts (with a weight of 0.8
the OSDs were almost full, so I switched back to 0.6).
root@ceph-04:~# ceph osd tree
# id    weight  type name       up/down reweight
-1      46.8    root default
-3      39              rack unknownrack
-2      13                      host ceph-01
0       1                               osd.0   up      1
1       1                               osd.1   up      1
...
27      1                               osd.27  up      1
28      1                               osd.28  up      1
-4      13                      host ceph-02
10      1                               osd.10  up      1
11      1                               osd.11  up      1
...
32      1                               osd.32  up      1
33      1                               osd.33  up      1
-5      13                      host ceph-03
16      1                               osd.16  up      1
18      1                               osd.18  up      1
...
37      1                               osd.37  up      1
38      1                               osd.38  up      1
-6      7.8             host ceph-04
39      0.6                     osd.39  up      1
40      0.6                     osd.40  up      1
...
50      0.6                     osd.50  up      1
51      0.6                     osd.51  up      1

How can I change ceph-04 to be part of rack unknownrack?
If I change that, will the content of the OSDs on ceph-04 stay roughly
the same, or will the whole content move again?
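From the documentation I would guess that moving the whole host bucket
works with something like
ceph osd crush move ceph-04 rack=unknownrack
but I'm not sure whether that would also trigger a full rebalance.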

Thanks for any feedback!

regards

Udo



