issues with adjusting the crushmap in 0.51

Hi All,

I've been playing around with Ceph 0.51 on two test machines at work,
experimenting with adjusting the crushmap to change from replicating
across OSDs to replicating across hosts. When I change the rule for my
data pool from type osd to type host, compile the crushmap, and then
run "ceph osd setcrushmap -i crush.new", it crashes my monitor if I
only have one running; if I have two, one of them crashes, the
setcrushmap call just hangs, and my test filesystem is left in an
unclean state.
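
For reference, this is roughly the sequence I'm using to edit and
inject the map (the file names are just what I happen to use, and the
crushtool steps are the usual decompile/recompile ones):

# grab the current map and decompile it to text
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# edit crush.txt (change "type osd" to "type host" in rule data),
# then recompile and inject it
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new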

I changed rule data {} to this:

rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type host
        step emit
}
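
I also wondered whether chooseleaf is what I actually want here rather
than choose; this is what I'd try next, though I haven't confirmed
whether 0.51 handles it any differently:

rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}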

Are there any constraints on changing the rules for where things get
replicated, i.e. on going from osd to host to rack for the data and
metadata pools?
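
Related to that, is there a recommended way to sanity-check a rule
before injecting it? I gather crushtool has a --test mode that can map
a range of inputs through a given rule; something like the following
is what I had in mind, though I haven't verified the exact flags
against the crushtool shipped with 0.51:

# map sample inputs through rule 0 asking for 2 replicas, and report
# how the picks are distributed
crushtool -i crush.new --test --rule 0 --num-rep 2 --show-statistics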

Here's my ceph.conf file and crushmap before the changes:

[global]
        #auth supported = cephx
        #keyring = /etc/ceph/ceph.keyring
        filestore xattr use omap = true
  
[osd]
        osd journal size = 1000
        filestore xattr use omap = true

[mon.a]
        host = 134.226.112.194
        mon addr = 134.226.112.194:6789
        mon data = /data/mon.$id
[mon.b]
        host = 134.226.112.138
        mon addr = 134.226.112.138:6789
        mon data = /home/mon.$id

[mds.a]
        host = 134.226.112.194
        mon data = /data/mds.$id

[mds.b]
        host = 134.226.112.138
        mon data = /home/mds.$id

[osd.0]
        host = 134.226.112.194
        osd data = /data/osd.$id
        osd journal = /data/osd.$id.journal

[osd.1]
        host = 134.226.112.194
        osd data = /data$id/osd.$id
        osd journal = /data$id/osd.$id.journal

[osd.2]
        host = 134.226.112.138
        osd data = /home/osd.$id
        osd journal = /home/osd.$id.journal


My crushmap

# begin crush map

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 pool

# buckets
host 134.226.112.194 {
        id -2           # do not change unnecessarily
        # weight 2.000
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 1.000
        item osd.0 weight 1.000
}
host 134.226.112.138 {
        id -4           # do not change unnecessarily
        # weight 1.000
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 1.000
}
rack rack-1 {
        id -3           # do not change unnecessarily
        # weight 3.000
        alg straw
        hash 0  # rjenkins1
        item 134.226.112.194 weight 2.000
        item 134.226.112.138 weight 1.000
}
pool default {
        id -1           # do not change unnecessarily
        # weight 2.000
        alg straw
        hash 0  # rjenkins1
        item rack-1 weight 2.000
}

# rules
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type osd
        step emit
}

rule metadata {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type osd
        step emit
}
rule rbd {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type osd
        step emit
}


Jimmy

-- 
Jimmy Tang
Trinity Centre for High Performance Computing,
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/


