Re: ceph 16.2.10 - misplaced object after changing crush map only setting hdd class


Hi,

Do you know if your crush tree already had the "shadow" tree (probably not)? If there was no shadow tree ("default~hdd"), then the remapping is expected. Which exact version did you install this cluster with?

storage01:~ # ceph osd crush tree --show-shadow
ID  CLASS  WEIGHT   TYPE NAME
-2    hdd  0.05699  root default~hdd
-4    hdd  0.05699      host storage01~hdd
 0    hdd  0.01900          osd.0
 1    hdd  0.01900          osd.1
 2    hdd  0.01900          osd.2
-1         0.05800  root default
-3         0.05800      host storage01
 0    hdd  0.01900          osd.0
 1    hdd  0.01900          osd.1
 2    hdd  0.01900          osd.2
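
If you still have a copy of the crushmap from before the change, you can confirm offline that the class-aware rules place PGs differently (hence the remapping). A rough sketch, assuming crush.before.bin and crush.after.bin are the two compiled maps and rule id 1 is one of your EC rules (adjust --num-rep to the pool size):

ceph osd getcrushmap -o crush.after.bin
crushtool -i crush.before.bin --test --rule 1 --num-rep 3 --show-mappings > before.txt
crushtool -i crush.after.bin --test --rule 1 --num-rep 3 --show-mappings > after.txt
diff before.txt after.txt    # every differing line is an input that maps to other OSDs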


Quoting xadhoom76@xxxxxxxxx:

Hi to all, and thanks for sharing your experience with Ceph!
We have a simple setup: 9 OSDs, all HDD, on 3 nodes (3 OSDs per node).
We started the cluster with a default, easy bootstrap to test how it works with HDDs. Then we decided to add SSDs and create a pool that uses only SSDs. In order to have HDD-only pools and SSD-only pools, we edited the crushmap to add class hdd. We have not added anything about SSDs yet (no disks, no rules); we only added the device class to the existing default rules.
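
(For reference, the edit was done with the usual crushmap round-trip, roughly as sketched below; the filenames are just examples.)

ceph osd getcrushmap -o crush.bin        # export the compiled crushmap
crushtool -d crush.bin -o crush.txt      # decompile to editable text
# edit crush.txt: add "class hdd" to the "step take default" lines
crushtool -c crush.txt -o crush.new.bin  # recompile
ceph osd setcrushmap -i crush.new.bin    # inject the new map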
Here are the rules before introducing class hdd:
# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule erasure-code {
        id 1
        type erasure
        min_size 3
        max_size 4
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
rule erasure2_1 {
        id 2
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.meta {
        id 3
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.data {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}

And here are the rules after the change:

# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}
rule erasure-code {
        id 1
        type erasure
        min_size 3
        max_size 4
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
rule erasure2_1 {
        id 2
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.meta {
        id 3
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.data {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
Just doing this triggered misplaced objects for all PGs belonging to the EC pools.
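
(As a side note, the data movement caused by a new crushmap can be previewed offline before injecting it, roughly like this; the filenames are placeholders.)

ceph osd getmap -o osdmap.bin                        # grab the current osdmap
osdmaptool osdmap.bin --test-map-pgs-dump > pg.old   # PG mappings with the current crushmap
osdmaptool osdmap.bin --import-crush crush.new.bin   # splice the edited crushmap into the copy
osdmaptool osdmap.bin --test-map-pgs-dump > pg.new   # PG mappings with the new crushmap
diff pg.old pg.new                                   # anything listed here would be remapped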

Is that correct, and why?
Best regards
Alessandro Bolgia

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


