Migrating from pre-Luminous multi-root CRUSH hierarchy

Hello,

 

When we started with Ceph we wanted to mix different disk types per host. Since that was before device classes were available, we followed the advice to create a multi-root hierarchy with disk-type-specific (fake) hosts.

 

So currently the OSD tree looks roughly like this:

 

-8          218.21320 root capacity-root
-7           36.36887     host ceph-dc-1-01-osd-01-sata-ssd
  2 capacity   3.63689         osd.2                             up  1.00000 1.00000
  9 capacity   3.63689         osd.9                             up  1.00000 1.00000
 10 capacity   3.63689         osd.10                            up  1.00000 1.00000
 15 capacity   3.63689         osd.15                            up  1.00000 1.00000
 20 capacity   3.63689         osd.20                            up  1.00000 1.00000
 24 capacity   3.63689         osd.24                            up  1.00000 1.00000
 30 capacity   3.63689         osd.30                            up  1.00000 1.00000
 33 capacity   3.63689         osd.33                            up  1.00000 1.00000
 41 capacity   3.63689         osd.41                            up  1.00000 1.00000
 46 capacity   3.63689         osd.46                            up  1.00000 1.00000
-9           36.36887     host ceph-dc-1-01-osd-02-sata-ssd
  0 capacity   3.63689         osd.0                             up  1.00000 1.00000
  1 capacity   3.63689         osd.1                             up  1.00000 1.00000
  5 capacity   3.63689         osd.5                             up  1.00000 1.00000
  7 capacity   3.63689         osd.7                             up  1.00000 1.00000
 12 capacity   3.63689         osd.12                            up  1.00000 1.00000
 13 capacity   3.63689         osd.13                            up  1.00000 1.00000
 17 capacity   3.63689         osd.17                            up  1.00000 1.00000
 18 capacity   3.63689         osd.18                            up  1.00000 1.00000
 21 capacity   3.63689         osd.21                            up  1.00000 1.00000
 23 capacity   3.63689         osd.23                            up  1.00000 1.00000
……
-73          10.46027 root ssd-root
-46            3.48676             host ceph-dc-1-01-osd-01-ssd
 38      ssd   0.87169                 osd.38                    up  1.00000 1.00000
 42      ssd   0.87169                 osd.42                    up  1.00000 1.00000
 47      ssd   0.87169                 osd.47                    up  1.00000 1.00000
 61      ssd   0.87169                 osd.61                    up  1.00000 1.00000
-52            3.48676             host ceph-dc-1-01-osd-02-ssd
 40      ssd   0.87169                 osd.40                    up  1.00000 1.00000
 43      ssd   0.87169                 osd.43                    up  1.00000 1.00000
 45      ssd   0.87169                 osd.45                    up  1.00000 1.00000
 49      ssd   0.87169                 osd.49                    up  1.00000 1.00000

 

We recently upgraded to Luminous (you can see the device classes in the output above). So it should now be possible to have one single root, no fake hosts, and to just use the device classes.
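
Just for context, this is roughly how we have been looking at the device classes since the upgrade; "ssd" below is simply one of the class names from the tree above:

# device classes the cluster knows about
ceph osd crush class ls

# OSDs assigned to a given class
ceph osd crush class ls-osd ssd

# the per-class "shadow" trees that class-aware rules actually use
ceph osd crush tree --show-shadow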

We recently added some hosts/OSDs which back new pools, so we also created a new hierarchy and CRUSH rules for those. That worked perfectly, and of course we would like to have the same for the old parts of the cluster, too.
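
For the new pools we did something along these lines; the rule, root and pool names here are only examples, not our real ones:

# replicated rule that only picks OSDs of class "ssd" under the given root,
# with host as the failure domain
ceph osd crush rule create-replicated ssd-repl newroot host ssd

# point a pool at that rule
ceph osd pool set somepool crush_rule ssd-repl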

 

Is it possible to move the existing OSDs to a new root/bucket without having to move all the data around (which would be difficult, because we don't have enough capacity to move 50 % of the OSDs)?
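
What we considered so far is to at least estimate the impact offline before touching the live map, roughly like this (file names are arbitrary, and rule id 0 / replica count 3 are placeholders for our real values):

# export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt (single root, real hosts), then recompile it
crushtool -c crushmap.txt -o crushmap.new

# check the mappings of the edited map without injecting it
crushtool -i crushmap.new --test --show-statistics --rule 0 --num-rep 3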

 

I imagine something like the following (a rough sketch of the commands I have in mind follows after the list):

 

1. Magic maintenance command

2. Move the OSDs to a new bucket in the hierarchy

3. Update the existing CRUSH rule, or create a new rule and update the pool

4. Magic maintenance-done command
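
In actual commands I would naively expect something like the sketch below; the root/rule/pool names are made up, and I am not sure whether the flags really avoid unnecessary data movement, which is basically my question:

# 1) "magic maintenance": keep the cluster from reacting immediately
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set norecover

# 2) create the new single root and the real host buckets, then move the OSDs
#    ("newroot" and "ceph-dc-1-01-osd-01" are placeholders for whatever we pick)
ceph osd crush add-bucket newroot root
ceph osd crush add-bucket ceph-dc-1-01-osd-01 host
ceph osd crush move ceph-dc-1-01-osd-01 root=newroot
ceph osd crush set osd.2 3.63689 root=newroot host=ceph-dc-1-01-osd-01
# ...repeat per OSD, then drop the emptied fake hosts and the old roots
ceph osd crush remove ceph-dc-1-01-osd-01-sata-ssd

# 3) device-class based rules, then switch the pools over
ceph osd crush rule create-replicated capacity-repl newroot host capacity
ceph osd pool set capacity-pool crush_rule capacity-repl

# 4) "maintenance done": let it settle again
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset norebalance

We would probably also set "osd crush update on start = false" so that restarting OSDs do not move themselves back based on their hostname, if I understand that option correctly.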

 

We also plan to migrate the OSDs to BlueStore. Should we do this

a) before moving, or

b) after moving?
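
For the BlueStore part, what we have in mind per OSD is the documented out / destroy / re-create cycle, roughly like this (OSD id and device are only examples):

ID=24
DEVICE=/dev/sdX          # placeholder, not our real device

ceph osd out ${ID}
# wait until the cluster says the OSD can be removed without losing redundancy
while ! ceph osd safe-to-destroy ${ID}; do sleep 60; done
systemctl stop ceph-osd@${ID}
ceph osd destroy ${ID} --yes-i-really-mean-it
ceph-volume lvm zap ${DEVICE}
ceph-volume lvm create --bluestore --data ${DEVICE} --osd-id ${ID}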

 

I hope our issue is clear.

 

Best regards

Carsten

 

 

 

 



WiTCOM SERVER HOUSING with green electricity - WiTCOM offers its customers in DC1 a complete power supply with ESWE Naturstrom.

