Hi,
I'm trying to migrate a CephFS data pool to a different one in order to
apply new pool parameters. I've found some hints but no specific
documentation on migrating pools.
I'm currently trying rados export + import, but I get errors like
these:
Write #-9223372036854775808:00000000:::100001e1007.00000000:head#
omap_set_header failed: (95) Operation not supported
The command I'm using is the following:
rados export -p cephfs_data | rados import -p cephfs_data_new -
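For reference, the one-step variant I'm aware of is rados cppool,
though I assume it would hit the same omap limitation (and, as far as I
understand, it doesn't handle self-managed snapshots):
rados cppool cephfs_data cephfs_data_new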
So, I have a few questions:
1) would it work to swap the CephFS data pools by renaming them while
the filesystem is down? (see the rename sketch after this list)
2) how can I copy the old data pool into a new one without errors like
the ones above?
3) a plain copy from one fs to another would also work, but I couldn't
find a way to tell the ceph-fuse clients to mount a specific filesystem
when there are several in the same cluster (see the mount sketch after
this list); is there any documentation on this?
4) even if I find a way to mount different filesystems of the same
cluster via fuse, is this feature stable enough, or is it still
super-experimental?
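For question 1, the swap I have in mind would look like this, with all
MDSs stopped and clients unmounted (pool names as in my cluster):
ceph osd pool rename cephfs_data cephfs_data_old
ceph osd pool rename cephfs_data_new cephfs_data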
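For questions 3 and 4, from the hints I found, something like the
following might be the way to mount a specific filesystem via fuse, but
I couldn't find documentation confirming it (the client_mds_namespace
option and the enable_multiple flag are my guesses from reading config
references):
ceph fs flag set enable_multiple true --yes-i-really-mean-it
ceph-fuse --client_mds_namespace=cephfs_new /mnt/cephfs_new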
Thanks,
Alessandro