moving EC pool from HDD to SSD without downtime

I need to move a 6+2 EC pool from HDDs to SSDs while the storage remains accessible. All SSDs and HDDs are within the same failure domains. The CRUSH rule in question is

rule sr-rbd-data-one {
        id 5
        type erasure
        min_size 3
        max_size 8
        step set_chooseleaf_tries 50
        step set_choose_tries 1000
        step take ServerRoom class hdd
        step chooseleaf indep 0 type host
        step emit
}

My inclination would be simply to change the entry "step take ServerRoom class hdd" to "step take ServerRoom class ssd" and wait for the dust to settle.
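
For concreteness, this is how I would apply that edit, assuming the usual crushmap round-trip (file names are just placeholders):

# Dump the current crushmap and decompile it to text.
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# Edit crush.txt: in rule sr-rbd-data-one, change
#   step take ServerRoom class hdd
# to
#   step take ServerRoom class ssd

# Recompile the edited map and inject it.
crushtool -c crush.txt -o crush-new.bin
ceph osd setcrushmap -i crush-new.bin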

However, this will almost certainly lead to all PGs becoming undersized and inaccessible, as all objects will be in the wrong place. I noticed that this is not an issue with PGs created by replicated rules, as these can temporarily contain more OSDs than the replication factor while objects are moved. The same does not seem to apply to EC rules. I suspect this is due to the setting "max_size 8", which does not allow more than 6+2=8 OSDs to be members of a PG.
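
For reference, the way I have been checking this behaviour is by comparing the "up" set (the OSDs the new rule maps a PG to) with the "acting" set (the OSDs currently serving I/O); <pgid> is a placeholder:

# Show up vs. acting sets for a single PG.
ceph pg map <pgid>

# Or list all PGs with their up/acting sets in brief form.
ceph pg dump pgs_brief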

What is the correct way to do what I need to do? Can I just set "max_size 16" and go? Will this work with EC rules? If not, what are my options?
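
For completeness, the only alternative I could come up with myself is to create a second erasure rule targeting the ssd class and repoint the pool at it, roughly as below; I am not sure this avoids the same problem (profile and rule names are made up, <pool-name> is a placeholder):

# Pause data movement while switching rules.
ceph osd set norebalance

# New EC profile and rule targeting the ssd device class under the same root.
ceph osd erasure-code-profile set sr-ec-6-2-ssd \
    k=6 m=2 crush-failure-domain=host \
    crush-root=ServerRoom crush-device-class=ssd
ceph osd crush rule create-erasure sr-rbd-data-one-ssd sr-ec-6-2-ssd

# Point the pool at the new rule, then let backfill proceed.
ceph osd pool set <pool-name> crush_rule sr-rbd-data-one-ssd
ceph osd unset norebalance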

Thanks!

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



