Are you talking about adding the new data pool to the current
filesystem? Like:
$ ceph fs add_data_pool my_ceph_fs new_ec_pool
I have done that, and now the filesystem shows up as having two data pools:
$ ceph fs ls
name: my_ceph_fs, metadata pool: cephfs_meta, data pools:
[cephfs_data new_ec_pool ]
but then I run into two issues:
1. How do I actually copy/move/migrate the data from the old pool to the
new pool?
2. When I'm done moving the data, how do I get rid of the old data pool?
I know there's a rm_data_pool option (the commands I would expect to try are
sketched below), but I have read on the mailing list that you can't remove
the original data pool from a cephfs filesystem.
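Purely as an untested sketch, using the pool names from my setup above, this
is what I would expect to run once the data has actually been moved off the
old pool:
$ ceph fs rm_data_pool my_ceph_fs cephfs_data
# only after the pool is detached from the filesystem, and only if
# mon_allow_pool_delete is enabled on the monitors:
$ ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
But given the warnings about the original data pool, I haven't dared to try it.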
The other option is to create a whole new cephfs with a new metadata
pool and the new data pool, but creating multiple filesystems is still
experimental and not allowed by default...
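If I did go that route, my understanding is that it would look roughly like
this (filesystem and pool names are placeholders, and the pools would have to
be created beforehand):
# allowing a second filesystem is the experimental part
$ ceph fs flag set enable_multiple true --yes-i-really-mean-it
$ ceph fs new my_new_fs new_cephfs_meta new_cephfs_data
$ ceph fs add_data_pool my_new_fs new_ec_pool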
On 6/28/19 8:28 AM, Marc Roos wrote:
What about adding the new data pool, mounting it, and then moving the
files? (Read: copying, because a move between data pools does not do what
you expect it to do.)
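Roughly what I have in mind, as an untested sketch (the mount point and
directory names are just examples; the layout attribute only affects files
written after it is set, hence the copy):
# assuming the filesystem is mounted at /mnt/cephfs
$ mkdir /mnt/cephfs/migrated
# send new files in this directory to the new pool via the directory layout
$ setfattr -n ceph.dir.layout.pool -v new_ec_pool /mnt/cephfs/migrated
# copy rather than mv: a mv only renames and leaves the objects in the old pool
$ cp -a /mnt/cephfs/olddata/. /mnt/cephfs/migrated/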
-----Original Message-----
From: Jorge Garcia [mailto:jgarcia@xxxxxxxxxxxx]
Sent: Friday 28 June 2019 17:26
To: ceph-users
Subject: Migrating a cephfs data pool
This seems to be an issue that gets brought up repeatedly, but I haven't
seen a definitive answer yet. So, at the risk of repeating a question
that has already been asked:
How do you migrate a cephfs data pool to a new data pool? The obvious
case would be somebody who has set up a replicated pool for their
cephfs data and then wants to convert it to an erasure-coded pool. Is
there a simple way to do this, other than creating a whole new ceph
cluster and copying the data using rsync?
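For context, the erasure-coded pool I have in mind would be created along
these lines (the pool name and PG count are just examples):
$ ceph osd pool create new_ec_pool 64 64 erasure
# overwrites must be allowed before cephfs can use an EC pool for data
$ ceph osd pool set new_ec_pool allow_ec_overwrites true
$ ceph osd pool application enable new_ec_pool cephfs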
Thanks for any clues
Jorge
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com