Minimal downtime when changing Erasure Code plugin on Ceph RGW

Hi All,

I would like to run a proof of concept on switching a running Ceph cluster's default.rgw.buckets.data pool from a non-default erasure code plugin back to the default jerasure plugin. I could not find any documentation on how to achieve this with minimal downtime. I understand that changing the erasure code plugin and profile requires creating a new pool and migrating the data from the old pool into it.
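To make the plan concrete, something like the following is what I have in mind for the new pool (the profile name, k/m values, and PG counts are placeholders for illustration, not a recommendation):

    # Recreate a default-style jerasure profile
    ceph osd erasure-code-profile set rgw-jerasure \
        plugin=jerasure k=2 m=1 technique=reed_sol_van crush-failure-domain=host

    # Create the replacement data pool on that profile
    ceph osd pool create default.rgw.buckets.data.new 4096 4096 erasure rgw-jerasure
    ceph osd pool application enable default.rgw.buckets.data.new rgw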

Is there a better way to change the erasure code plugin with as little downtime as possible on a Ceph RGW setup with hundreds of millions of objects?

Since `rados cppool` is deprecated, I'm considering the cache tier method explained at https://ceph.com/geen-categorie/ceph-pool-migration/. Is the cache tier method feasible here?
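For reference, the steps from that post applied to my pools would look roughly like this. This is an untested sketch; in particular I'm not sure Ceph will even accept an erasure-coded pool as the tier, which is part of my question (and on recent releases the forward cache mode also needs --yes-i-really-mean-it):

    # Put the old pool in front of the new pool as a cache tier
    ceph osd tier add default.rgw.buckets.data.new default.rgw.buckets.data --force-nonempty
    ceph osd tier cache-mode default.rgw.buckets.data forward --yes-i-really-mean-it
    ceph osd tier set-overlay default.rgw.buckets.data.new default.rgw.buckets.data

    # Flush every object from the old pool into the new one
    rados -p default.rgw.buckets.data cache-flush-evict-all

    # Detach the tier once the old pool is empty
    ceph osd tier remove-overlay default.rgw.buckets.data.new
    ceph osd tier remove default.rgw.buckets.data.new default.rgw.buckets.data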


Kind regards,

Charles Alva
Sent from Gmail Mobile
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
