CRUSH Maps


 



I have a test cluster that is up and running. It consists of three mons and three OSD servers, each OSD server having eight OSDs and two SSDs for journals. I'd like to move from the flat CRUSH map to a map with typical depth, using most of the predefined types. I have decompiled the current CRUSH map and edited it to add the additional depth of failure zones.
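For context, the round-trip I'm using looks roughly like this (file names are arbitrary):

```
# extract the compiled map from the cluster
ceph osd getcrushmap -o crushmap.bin
# decompile to editable text
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt to add rows, racks, etc. ...
# recompile, then inject back into the cluster
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```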

 

Questions:

 

1) Do the IDs of the buckets need to be consecutive, or can I make them up, as long as they are negative in value and unique?

2) Is there any way I can control the assignment of the bucket IDs if I update the CRUSH map on a running system using the CLI?

3) Is there any harm in defining buckets that are not currently used, assigning them a weight of 0 so they aren't selected (e.g. a row defined with racks, but the racks have no hosts)?

4) Can I have a bucket with no "item" lines in it, or does each bucket need at least one item declaration to be valid?
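Before injecting the edited map, I believe it can be compiled and exercised offline with crushtool, which should surface problems like invalid buckets without touching the cluster — a sketch, assuming the edited text is in crushmap.txt:

```
# compile; crushtool rejects syntax errors here
crushtool -c crushmap.txt -o crushmap.new
# simulate placements for a rule against the new map
crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-statistics
```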

 

Example:

# begin crush map

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root

# buckets
host spucosds01 {
        id -2           # do not change unnecessarily
        # weight 29.120
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 3.640
        item osd.1 weight 3.640
        item osd.2 weight 3.640
        item osd.3 weight 3.640
        item osd.4 weight 3.640
        item osd.5 weight 3.640
        item osd.6 weight 3.640
        item osd.7 weight 3.640
}
host spucosds02 {
        id -3           # do not change unnecessarily
        # weight 29.120
        alg straw
        hash 0  # rjenkins1
        item osd.8 weight 3.640
        item osd.9 weight 3.640
        item osd.10 weight 3.640
        item osd.11 weight 3.640
        item osd.12 weight 3.640
        item osd.13 weight 3.640
        item osd.14 weight 3.640
        item osd.15 weight 3.640
}
host spucosds03 {
        id -4           # do not change unnecessarily
        # weight 29.120
        alg straw
        hash 0  # rjenkins1
        item osd.16 weight 3.640
        item osd.17 weight 3.640
        item osd.18 weight 3.640
        item osd.19 weight 3.640
        item osd.20 weight 3.640
        item osd.21 weight 3.640
        item osd.22 weight 3.640
        item osd.23 weight 3.640
}
rack rack2-2 {
        id -220
        alg straw
        hash 0
        item spucosds01 weight 29.12
}
rack rack3-2 {
        id -230
        alg straw
        hash 0
        item spucosds02 weight 29.12
}
rack rack4-2 {
        id -240
        alg straw
        hash 0
        item spucosds03 weight 29.12
}
row row1 {
        id -100
        alg straw
        hash 0
}
row row2 {
        id -200
        alg straw
        hash 0
        item rack2-2 weight 29.12
        item rack3-2 weight 29.12
        item rack4-2 weight 29.12
}
datacenter smt {
        id -1000
        alg straw
        hash 0
        item row1 weight 0.0
        item row2 weight 87.36
}
root default {
        id -1           # do not change unnecessarily
        # weight 87.360
        alg straw
        hash 0  # rjenkins1
        item smt weight 87.36
}

# rules
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule metadata {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule rbd {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map
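Regarding question 1, every bucket id in the map above is negative and unique, but not consecutive (-2, -220, -1000, ...). A quick self-check I scripted for my edited map — my own sketch, not Ceph tooling (crushtool -c remains the authoritative validator) — just scans the decompiled text for "id" lines and asserts that pattern holds:

```python
import re

def check_bucket_ids(crushmap_text):
    """Collect bucket ids from a decompiled CRUSH map and verify they
    are unique and negative; they are not required to be consecutive."""
    ids = [int(m) for m in re.findall(r"^\s*id\s+(-?\d+)", crushmap_text, re.M)]
    assert len(ids) == len(set(ids)), "duplicate bucket id"
    assert all(i < 0 for i in ids), "bucket ids must be negative"
    return ids

# Trimmed fragment of the map above, just to exercise the check
sample = """\
host spucosds01 {
        id -2
        alg straw
}
rack rack2-2 {
        id -220
        alg straw
}
"""

print(check_bucket_ids(sample))  # -> [-2, -220]
```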

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
