Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

Hi,

I'm trying to migrate a CephFS data pool to a different pool in order to apply new pool parameters. I've found some hints, but no specific documentation on migrating pools.

I'm currently trying rados export + import, but I get errors like the following:

Write #-9223372036854775808:00000000:::100001e1007.00000000:head#
omap_set_header failed: (95) Operation not supported

The command I'm using is the following:

 rados export -p cephfs_data | rados import -p cephfs_data_new -
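
For reference, before running that I created the destination pool more or less like this (the parameters here are just placeholders, the real target pool is created with the new settings I'm after):

    # destination pool, then tag it for cephfs use
    ceph osd pool create cephfs_data_new 128 128
    ceph osd pool application enable cephfs_data_new cephfs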

So, I have a few questions:


1) Would it work to swap the CephFS data pools by renaming them while the filesystem is down?

2) How can I copy the old data pool into a new one without hitting errors like the ones above?

3) A plain copy from one filesystem to another would also work, but I couldn't find a way to tell the ceph-fuse clients to mount different filesystems within the same cluster. Is there any documentation on this? (A rough sketch of what I imagine this would look like follows after the questions.)

4) Even if I find a way to mount, via FUSE, different filesystems belonging to the same cluster, is this feature stable enough to use, or is it still considered experimental?
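
To make question 3 concrete, here is a rough sketch of what I imagine the second filesystem and the fuse mount would look like, assuming enable_multiple and client_mds_namespace are the right knobs (names are just examples, I haven't actually run this):

    # allow more than one filesystem in the cluster
    ceph fs flag set enable_multiple true --yes-i-really-mean-it

    # second filesystem on the new pools
    ceph fs new cephfs_new cephfs_metadata_new cephfs_data_new

    # mount the new filesystem with ceph-fuse, selecting it by name
    ceph-fuse --client_mds_namespace=cephfs_new /mnt/cephfs_new

If that's roughly the right approach, question 4 is really about whether it is safe to rely on it.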


Thanks,


    Alessandro


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



