On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
<Alessandro.DeSalvo@xxxxxxxxxxxxx> wrote:
>
> Hi,
>
> I'm trying to migrate a cephfs data pool to a different one in order to
> reconfigure it with new pool parameters. I've found some hints but no
> specific documentation on migrating pools.
>
> I'm currently trying rados export + import, but I get errors like
> these:
>
> Write #-9223372036854775808:00000000:::100001e1007.00000000:head#
> omap_set_header failed: (95) Operation not supported
>
> The command I'm using is the following:
>
> rados export -p cephfs_data | rados import -p cephfs_data_new -
>
> So, I have a few questions:
>
> 1) would it work to swap the cephfs data pools by renaming them while
> the fs cluster is down?
>
> 2) how can I copy the old data pool into a new one without errors like
> the ones above?
>

This won't work as you expect: CephFS metadata records the IDs (not
the names) of its data pools, so renaming or swapping pools does not
change which pool the filesystem actually points at. See the sketch at
the end of this mail for a filesystem-level alternative.

> 3) a plain copy from one fs to another would also work, but I didn't
> find a way to tell the ceph-fuse clients how to mount different
> filesystems in the same cluster, any documentation on it?
>

ceph-fuse /mnt/ceph --client_mds_namespace=cephfs_name

(a fuller example is at the end of this mail)

> 4) even if I found a way to mount different filesystems belonging to
> the same cluster via fuse, is this feature stable enough, or is it
> still super-experimental?
>

Very stable.

> Thanks,
>
> Alessandro
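
Regarding question 2: instead of copying raw RADOS objects, a commonly
suggested alternative is to migrate at the filesystem level, so the MDS
records the new layouts itself. This is only a sketch; the filesystem
name "cephfs", the mount point /mnt/ceph and the directory below are
placeholders for your setup:

# Check the current data pools and their IDs:
ceph fs ls
ceph osd pool ls detail

# Attach the new pool as an additional data pool of the filesystem:
ceph fs add_data_pool cephfs cephfs_data_new

# Point a directory's layout at the new pool. Only files created
# after this inherit the new layout:
setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/ceph/mydir

# Existing files keep their old layout, so they must be rewritten
# (copy + rename back) to move their data into the new pool:
cp -a /mnt/ceph/mydir/file /mnt/ceph/mydir/file.new
mv /mnt/ceph/mydir/file.new /mnt/ceph/mydir/file

Note that the old pool can only be detached (ceph fs rm_data_pool) once
no file layouts reference it, and the default data pool of a filesystem
cannot be removed at all.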
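
Regarding questions 3 and 4: multiple filesystems per cluster must be
enabled explicitly before you can create a second one; after that, each
client selects a filesystem by name. A sketch with placeholder names
(cephfs, cephfs_new, monitor host mon1; authentication options omitted):

# Allow more than one filesystem in the cluster (off by default):
ceph fs flag set enable_multiple true --yes-i-really-mean-it

# Mount each filesystem by name with ceph-fuse:
ceph-fuse /mnt/cephfs --client_mds_namespace=cephfs
ceph-fuse /mnt/cephfs_new --client_mds_namespace=cephfs_new

# Kernel client equivalent, via the mds_namespace mount option:
mount -t ceph mon1:6789:/ /mnt/cephfs_new -o name=admin,mds_namespace=cephfs_new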