Re: Migrating to new pools (RBD, CephFS)

Hi,


If the problem is not severe and you can wait, then according to this:

http://ceph.com/community/new-luminous-pg-overdose-protection/

there is a PG merge feature coming.
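For what it's worth, pg_num can currently only be increased; the expectation is that, once merging lands, the same command will also accept a smaller value (pool name is just an example):

  # works today (12.2.x): pg_num can only grow
  ceph osd pool set mypool pg_num 256
  ceph osd pool set mypool pgp_num 256

  # once PG merging is available, the same call should accept a lower
  # value as well (not possible in 12.2.x)
  ceph osd pool set mypool pg_num 64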


Regards,

Denes.


On 12/18/2017 02:18 PM, Jens-U. Mozdzen wrote:
Hi *,

facing the problem of reducing the number of PGs for a pool, I've found various pieces of information and suggestions, but no "definitive guide" to handling pool migration with Ceph 12.2.x. This seems to be a fairly common problem when having to deal with "teen-age clusters", so consolidated information would be a real help. I'm willing to start writing things up, but don't want to duplicate information. So:

Are there any documented "operational procedures" on how to migrate

- an RBD pool (with snapshots created by Openstack)

- a CephFS data pool

- a CephFS metadata pool

to a different pool, in order to be able to utilize pool settings that cannot be changed on an existing pool?

---

RBD pools: From what I've read, RBD snapshots are "broken" after using "rados cppool" to move the content of an "RBD pool" to a new pool.
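A per-image copy with the rbd tools is sometimes suggested instead of "rados cppool"; a rough sketch (pool names are examples), with the caveat that a plain export/import does not carry the snapshots either; they would have to be replayed via export-diff/import-diff:

  # copy each image individually (snapshots are NOT preserved this way)
  for img in $(rbd ls old-rbd-pool); do
      rbd export old-rbd-pool/$img - | rbd import - new-rbd-pool/$img
  done

  # snapshots would have to be replayed one by one, e.g.
  #   rbd export-diff old-rbd-pool/<image>@<snap> - | rbd import-diff - new-rbd-pool/<image>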

---

CephFS data pool: I know I can add additional pools to a CephFS instance ("ceph fs add_data_pool") and have newly created files placed in the new pool (via "file layouts"). But according to the docs, a small amount of metadata is kept in the primary data pool for all files, so I cannot remove the original pool.
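For reference, that per-directory redirection would look roughly like this (pool, fs and mount point names are just examples):

  # create the new data pool and attach it to the file system
  ceph osd pool create cephfs_data_new 128
  ceph fs add_data_pool cephfs cephfs_data_new

  # have all files created below this directory go to the new pool
  setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/some/dir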

I couldn't identify how CephFS (MDS) determines its current data pool (or "default data pool" in the case of multiple pools - the one named in "ceph fs new"), so "rados cppool"-moving the data to a new pool and then reconfiguring CephFS to use the new pool (while the MDS are stopped, of course) is not yet an option? And there might be references to the pool id hiding in CephFS metadata, too, invalidating this approach altogether.
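For inspection, the pools the file system currently references are at least visible (fs name is an example):

  # pools by name
  ceph fs ls

  # metadata_pool / data_pools entries by pool ID
  ceph fs get cephfs

  # map pool IDs back to pool names
  ceph osd lspools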

Of course, dumping the current content of the CephFS to external storage and recreating the CephFS instance with new pools is a potential option, but may require a substantial amount of extra storage ;)

---

CephFS metadata pool: I've not seen any indication of a procedure to swap metadata pools.


I couldn't identify how CephFS (MDS) determines its current metadata pool, so "rados cppool"-moving the metadata to a new pool and then reconfiguring CephFS to use the new pool (while the MDS are stopped, of course) is not yet an option?

Of course, dumping the current content of the CephFS to external storage and recreating the CephFS instance with new pools is a potential option, but may require a substantial amount of extra storage ;)
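For completeness, a rough sketch of that fallback (untested; names and the mount point are examples, and all MDS daemons have to be stopped before removing the file system):

  # 1. copy everything out (assuming the CephFS is mounted at /mnt/cephfs)
  rsync -aHAX /mnt/cephfs/ /mnt/backup/

  # 2. remove the file system (requires all MDS daemons to be down)
  ceph fs rm cephfs --yes-i-really-mean-it

  # 3. recreate it on new pools created with the desired settings
  ceph osd pool create cephfs_metadata_new 64
  ceph osd pool create cephfs_data_new 128
  ceph fs new cephfs cephfs_metadata_new cephfs_data_new

  # 4. copy the data back in
  rsync -aHAX /mnt/backup/ /mnt/cephfs/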

---

http://cephnotes.ksperis.com/blog/2015/04/15/ceph-pool-migration describes an interesting approach: migrate all pool contents by making the current pool a cache tier of the new pool and then flushing the "cache tier content" down to the (new) base pool. But I'm not yet able to judge the approach and will have to conduct tests. Can anyone already make an educated guess whether this would circumvent the "snapshot" problem for RBD pools in particular, and how CephFS would react to this approach? This "cache tier" approach, if feasible, would be a nice way to avoid downtime and extra space requirements.
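If I read the post correctly, the sequence boils down to something like the following (pool names are examples, and I have not tested this yet, so please treat it as a sketch only):

  # create the new pool with the desired settings
  ceph osd pool create newpool 64

  # make the existing, non-empty pool a cache tier in front of the new pool
  ceph osd tier add newpool oldpool --force-nonempty
  # "forward" may require --yes-i-really-mean-it depending on the release
  ceph osd tier cache-mode oldpool forward

  # flush/evict every object from the old pool down into the new base pool
  rados -p oldpool cache-flush-evict-all

  # detach the tiers once the old pool is empty
  ceph osd tier remove newpool oldpool

  # the blog post then renames the pools so clients keep using the old name
  ceph osd pool rename oldpool oldpool.old
  ceph osd pool rename newpool oldpool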

Thank you for any ideas, insight and experience you can share!

Regards,
J

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

