Current best practice for migrating from one EC profile to another?

Hi,

As we expand our cluster (adding nodes), we'd like to take advantage of
the better EC profiles enabled by higher server/rack counts. I understand
that, as Ceph currently stands (15.2.4), there is no way to live-migrate
an existing pool from one EC profile to another, for example from 4+2 to
17+3 when growing from 7 nodes to 21. Is this correct?
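
For concreteness, the two profiles in question would be created along
these lines (the profile names and crush failure domain here are just
placeholders for illustration):

  ceph osd erasure-code-profile set ec-4-2  k=4  m=2 crush-failure-domain=host
  ceph osd erasure-code-profile set ec-17-3 k=17 m=3 crush-failure-domain=host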

How are people accomplishing migrations such as this one (or 4+2 to 9+3,
for example) with minimal disruption to services that use RBDs sitting on
top of these pools? I found
https://ceph.io/geen-categorie/ceph-pool-migration/ , which requires
effectively shutting down access during the migration (doable, but not
ideal) and, from what I've read, has some potential downsides
(specifically with the cppool method).
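
As I understand the linked article, the cppool method boils down to
something like the sketch below (pool and profile names are placeholders,
all clients have to be stopped before the copy, and, from what I've read,
cppool does not carry snapshots over, which is one of the downsides I'm
worried about):

  # target pool with the wider profile; EC pools backing RBD need overwrites
  ceph osd pool create rbd-ec-new 128 128 erasure ec-17-3
  ceph osd pool set rbd-ec-new allow_ec_overwrites true
  # copy every object, then swap the pool names
  rados cppool rbd-ec-old rbd-ec-new
  ceph osd pool rename rbd-ec-old rbd-ec-old-backup
  ceph osd pool rename rbd-ec-new rbd-ec-old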

The data pools are all EC, and a replicated rbd pool holds the metadata.
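
That is, each image keeps its header in the replicated pool while its
data objects land in the EC pool, roughly like this (names are
placeholders):

  rbd create --size 100G --data-pool rbd-ec-old rbd/myimage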

Thank you!