Migrating EC pool to device-class crush rules

Like many, we have a typical double-root crush map, for hdd- vs ssd-based pools. We've been running luminous for some time, so in preparation for a migration to new storage hardware, I wanted to migrate our pools to the new device-class-based crush rules; this way I shouldn't need to perpetuate the double hdd/ssd crush map for the new hardware...

I understand how to migrate our replicated pools, by creating new replicated crush rules and moving the pools over one at a time (sketch below), but I'm confused about how to do this for erasure pools.
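For reference, the replicated migration I have in mind is something like the following (the rule and pool names here are just placeholders for ours):

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set <poolname> crush_rule replicated_hdd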

I can create a new class-aware EC profile with something like:

ceph osd erasure-code-profile set ecprofile42_hdd k=4 m=2 crush-device-class=hdd crush-failure-domain=host

then a new crush rule from this:

ceph osd crush rule create-erasure ec42_hdd ecprofile42_hdd

So mostly I want to confirm that it is safe to change the crush rule for the EC pool. It seems to make sense, but as I understand it, you can't change the erasure code profile of a pool after creation, and this seems to do so implicitly...
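Concretely (assuming the pool is named the same as its old rule), the only change I'd actually make is:

ceph osd pool set .rgw.buckets.ec42 crush_rule ec42_hdd

i.e. only the placement rule changes, and the pool's k=4/m=2 coding parameters stay untouched, if I understand correctly.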

old rule:
rule .rgw.buckets.ec42 {
        id 17
        type erasure
        min_size 3
        max_size 20
        step set_chooseleaf_tries 5
        step take platter
        step chooseleaf indep 0 type host
        step emit
}

old ec profile:
# ceph osd erasure-code-profile get ecprofile42
crush-failure-domain=host
directory=/usr/lib/x86_64-linux-gnu/ceph/erasure-code
k=4
m=2
plugin=jerasure
technique=reed_sol_van

new rule:
rule ec42_hdd {
        id 7
        type erasure
        min_size 3
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}

new ec profile:
# ceph osd erasure-code-profile get ecprofile42_hdd
crush-device-class=hdd
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8

These are both ec42 (k=4, m=2), but I'm not sure why the old rule has "max_size 20" (perhaps because it was generated a long time ago, under hammer?).
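Before switching anything over, I was planning to sanity-check the mappings from the new rule offline with crushtool, something like this (rule id 7 from the dump above, --num-rep 6 for the k+m=6 chunks):

ceph osd getcrushmap -o crushmap
crushtool -i crushmap --test --rule 7 --num-rep 6 --show-mappings

which should confirm the new rule only selects hdd-class OSDs, one per host.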

Thanks for any feedback,

Graham
--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx


