Possible without downtime: configure multi-site, create a new zone backed by the new pool, let the cluster sync to itself, fail over to the new zone, then delete the old zone.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Feb 24, 2020 at 6:14 PM Vladimir Brik
<vladimir.brik@xxxxxxxxxxxxxxxx> wrote:
>
> Hello
>
> I have ~300TB of data in the default.rgw.buckets.data k2m2 pool and I
> would like to move it to a new k5m2 pool.
>
> I found instructions using cache tiering [1], but they come with a
> vague, scary warning, and it looks like EC-to-EC tiering may not even
> be possible [2] (is that still the case?).
>
> Can anybody recommend a safe procedure to copy an EC pool's data to
> another pool with a more efficient erasure coding? Perhaps there is a
> tool out there that could do it?
>
> A few days of downtime would be tolerable if it simplifies things.
> I also have enough free space to temporarily store the k2m2 data in a
> replicated pool (in case EC-to-EC tiering is not possible but
> EC-to-replicated and replicated-to-EC tiering is).
>
> Is there a tool or some efficient way to verify that the contents of
> two pools are the same?
>
> Thanks,
>
> Vlad
>
> [1] https://ceph.io/geen-categorie/ceph-pool-migration/
> [2] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-February/016109.html
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
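The zone-failover procedure above can be sketched roughly with `radosgw-admin`. This is only a sketch, not a tested runbook: the realm, zonegroup, zone, and pool names (`myrealm`, `us`, `zone-new`, `zone-old`) are placeholders, the exact first steps depend on whether the cluster already runs multi-site, and the placement edit must match your zone's actual placement/storage-class layout:

```shell
# Sketch only -- all names here are assumptions; adapt to your existing
# realm/zonegroup if the cluster is already configured for multi-site.

# 1. Put the existing setup into a realm (skip if already multi-site)
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin period update --commit

# 2. Create the new zone in the same zonegroup as the existing one
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=zone-new

# 3. Point the new zone's data placement at the new k5m2 pool:
#    dump the zone config, edit the data_pool entries to the new EC
#    pool, and load it back
radosgw-admin zone get --rgw-zone=zone-new > zone-new.json
# (edit data_pool in zone-new.json, then:)
radosgw-admin zone set --rgw-zone=zone-new --infile zone-new.json
radosgw-admin period update --commit

# 4. Run an RGW instance for zone-new and wait until sync has caught up
radosgw-admin sync status --rgw-zone=zone-new

# 5. Fail over: make the new zone master, then retire the old one
radosgw-admin zone modify --rgw-zone=zone-new --master --default
radosgw-admin period update --commit
radosgw-admin zone delete --rgw-zone=zone-old
```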
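On the question of verifying that two pools hold the same data: at the rados level the object names in the two zones' pools will generally differ (zones prefix their objects), so a comparison is more meaningful at the S3 level, e.g. comparing bucket listings with object ETags from both zones. A minimal sketch of the comparison step, assuming you have already gathered name-to-checksum listings by whatever means (the function and names here are hypothetical, not an existing tool):

```python
def diff_listings(a, b):
    """Compare two {object_name: checksum} listings.

    Returns (missing_in_b, missing_in_a, mismatched_checksums),
    each as a sorted list of object names.
    """
    missing_in_b = sorted(set(a) - set(b))
    missing_in_a = sorted(set(b) - set(a))
    mismatched = sorted(n for n in set(a) & set(b) if a[n] != b[n])
    return missing_in_b, missing_in_a, mismatched

# Toy example with made-up names and checksums:
old_zone = {"obj1": "aa", "obj2": "bb", "obj3": "cc"}
new_zone = {"obj1": "aa", "obj2": "XX"}
print(diff_listings(old_zone, new_zone))
# → (['obj3'], [], ['obj2'])
```

An empty result in all three lists means the two listings agree on both membership and checksums.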