Re: moving EC pool from HDD to SSD without downtime

On Mon, Sep 30, 2019 at 7:42 PM Frank Schilder <frans@xxxxxx> wrote:
>

> and I would be inclined just to change the entry "step take ServerRoom class hdd" to "step take ServerRoom class ssd" and wait for the dust to settle.

yes
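
For reference, this is all it amounts to in the decompiled map -- a
sketch assuming a typical 6+2 EC rule with the names from your mail
(everything except the max_size 8 you quoted is a placeholder; your
actual rule will differ):

    rule ec_serverroom {
            id 1
            type erasure
            min_size 8
            max_size 8
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take ServerRoom class hdd    # the only change: hdd -> ssd
            step chooseleaf indep 0 type host
            step emit
    }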


> However, this will almost certainly lead to all PGs being undersized and inaccessible as all objects are in the wrong place.

no -- the PGs go remapped/misplaced, not undersized; they keep serving
I/O from the old OSDs until backfill has moved the data

> I noticed that this is not an issue with PGs created by replicated rules, as they can contain more OSDs than the replication factor while objects are moved. The same does not apply to EC rules. I suspect this is due to the setting "max_size 8", which does not allow more than 6+2=8 OSDs to be members of a PG.

no -- a PG never needs more than k+m=8 OSDs at a time; while the data
moves, the new (up) set and the old (acting) set are tracked
separately, so max_size is not the limiting factor
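
You can watch this while the migration runs: "ceph pg map <pgid>"
prints the up set (the new SSD OSDs) and the acting set (the old HDD
OSDs that keep serving the PG until backfill finishes) side by side,
and neither set ever holds more than 8 entries:

    ceph pg map <pool-id>.<pg-id>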


> What is the correct way to do what I need to do? Can I just set "max_size 16" and go? Will this work with EC rules? If not, what are my options?

just change it; you don't need to raise max_size. It won't be pretty
and might have an impact on performance, but the pool will stay
available throughout
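
If you prefer to sanity-check the edit offline first, the usual
crushtool round trip works (standard commands; the file names are just
examples):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: "step take ServerRoom class hdd" -> "... class ssd"
    crushtool -c crushmap.txt -o crushmap.new
    # dry run: check the rule now maps each PG to 8 SSD OSDs
    crushtool -i crushmap.new --test --rule <rule-id> --num-rep 8 --show-mappings
    ceph osd setcrushmap -i crushmap.new

To soften the performance hit you can throttle backfill while it runs,
e.g. "ceph config set osd osd_max_backfills 1" (config-db syntax from
Mimic onward; adjust for your release).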

>
> Thanks!


-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



