Re: Converting/Migrating EC pool to a replicated pool

Hi,

instead of exporting/importing single objects via rados export/import, I would use 'rados cppool <pool-name> <dest-pool>', although it also does a linear copy of each object, so I'm not sure it's that much better... So: first create a new replicated pool, run 'rados cppool old new', then rename the original pool, and finally rename the new pool to the original name (see the sketch below). Do you know whether the remaining head objects change a lot or are read very frequently? That could cause some interruption as well; I'm not sure there's a smoother way, though. You could monitor (or maybe you already do) the pool stats for that pool:

watch ceph osd pool stats <pool>
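A minimal sketch of the copy-and-rename sequence described above; the pool names and PG count are placeholders, adjust to your cluster:

# create the new replicated pool (PG count is a placeholder)
ceph osd pool create rgw.data.new 128 128 replicated
ceph osd pool application enable rgw.data.new rgw

# linear copy of every object; can take a while on large pools
rados cppool rgw.data.ec rgw.data.new

# swap the names so clients keep using the original pool name
ceph osd pool rename rgw.data.ec rgw.data.ec.old
ceph osd pool rename rgw.data.new rgw.data.ec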

The cache tier approach is not really applicable here; the cache tier can't be an EC pool:

ceph osd tier add test-cold-pool test-ec --force-nonempty
Error ENOTSUP: tier pool 'test-ec' is an ec pool, which cannot be a tier

Maybe someone else has some more experience to share.

Regards,
Eugen

Quoting Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>:

Hi,


we recently expanded a Ceph cluster with a second site and want to distribute data accordingly: two replicas should be present on one site, one replica on the other site.

This works well for replicated pools, but not for EC pools, e.g. the RGW data pool. I've added a second (replicated) storage class to RGW and moved most of the data to it via lifecycle operations. Now about 670 GB are left in the EC pool, mostly head objects, which cannot be moved.
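(For context, a rough sketch of how such a storage class and lifecycle transition can be wired up; the storage-class name REPLICATED, the pool and bucket names, and the default zone/zonegroup are placeholders:)

# define the new storage class and back it with a replicated pool
radosgw-admin zonegroup placement add --rgw-zonegroup default \
    --placement-id default-placement --storage-class REPLICATED
radosgw-admin zone placement add --rgw-zone default \
    --placement-id default-placement --storage-class REPLICATED \
    --data-pool default.rgw.replicated.data
radosgw-admin period update --commit

# lifecycle rule that transitions all objects to the new storage class
cat > lifecycle.json <<'EOF'
{"Rules": [{"ID": "to-replicated", "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 0, "StorageClass": "REPLICATED"}]}]}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket mybucket --lifecycle-configuration file://lifecycle.json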

I'm looking for the best method to migrate the existing EC pool to a replicated pool, keeping the downtime as small as possible. So far two methods seem to be feasible:


1. Cache tiering

Set up cache tiering with the EC pool as the frontend pool and a replicated target pool as the backend pool. Set the cache mode to proxy and evict all objects from the EC pool to the replicated pool. This would reduce the downtime to a minimum (stop all RGW instances, change the pool name in the placement definition, restart the instances). But I'm not sure whether an EC pool can act as a frontend pool at all. And cache tiering is deprecated, so there might be other pitfalls.
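(The intended sequence would look roughly like the following, with placeholder pool names; as the reply above shows, the very first step fails when the intended cache pool is erasure coded:)

ceph osd tier add rgw.data.new rgw.data.ec    # fails: an EC pool cannot be a tier
ceph osd tier cache-mode rgw.data.ec proxy
rados -p rgw.data.ec cache-flush-evict-all
ceph osd tier remove rgw.data.new rgw.data.ec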


2. Export / Import

Stop all RGW instances, export the EC pool's content with "rados export", and import it into the target pool. This should be a safe method, but will require a significantly longer downtime (probably 24+ hours).
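(A sketch of that sequence, with placeholder pool names and dump file path, assuming enough scratch space for the dump:)

# stop all RGW instances first, then serialize the EC pool to a file
rados -p rgw.data.ec export /var/tmp/rgw-data.dump

# import into the pre-created replicated target pool
rados -p rgw.data.new import /var/tmp/rgw-data.dump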


Are there any other options? I also evaluated working bucket-wise: create a second placement target, disable access to a bucket, move all objects belonging to that bucket from one pool to the other (identifiable via the bucket's marker), change the placement setting of the bucket, and re-enable access. Unfortunately I didn't find a good method to temporarily disable bucket access.
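(For illustration, a sketch of finding a bucket's objects via its marker; the bucket and pool names are placeholders:)

# the bucket marker prefixes all RADOS objects belonging to the bucket
marker=$(radosgw-admin bucket stats --bucket=mybucket | jq -r '.marker')

# list that bucket's objects in the EC data pool
rados -p rgw.data.ec ls | grep "^${marker}"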


Best regards,

Burkhard Linke

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

