Re: pool migration for cephfs?

Oops, forgot a step - need to tell the MDS about the new pool before step 2:

`ceph mds add_data_pool <name>`

You may also need to mark the pool as used by cephfs:

`ceph osd pool application enable {pool-name} cephfs`
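
A quick check (my addition, not in the original message) that the pool is now attached to the filesystem:

`ceph fs ls`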

On Wed, May 15, 2019 at 3:15 PM Elise Burke <elise.null@xxxxxxxxx> wrote:
I came across that and tried it. The short answer is no, you can't do that with the cache-tier approach. I'm less sure about the longer answer as to why, but IIRC it has to do with copying / editing the OMAP object properties.

The good news, however, is that you can 'fake it' using File Layouts - http://docs.ceph.com/docs/mimic/cephfs/file-layouts/

In my case I was moving around / upgrading disks and wanted to change from unreplicated (well, r=1) to erasure coding (in my case, rs4.1). I was able to do this keeping the following in mind:

1. The original pool, cephfs_data, must remain a replicated pool. I'm unsure exactly why, but IIRC the default data pool stores some per-file metadata (backtraces) that can't live in an erasure-coded pool.
2. The metadata pool, cephfs_metadata, must also remain as a replicated pool.
3. Your new pool (the destination pool) can be created however you like (one caveat for erasure-coded pools below).
4. This procedure involves rolling unavailability on a per-file basis.
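
A caveat on point 3 that isn't in the original message but comes from the Ceph docs: CephFS can only store data in an erasure-coded pool if overwrites are enabled on it, and that in turn requires all OSDs in the pool to be BlueStore:

`ceph osd pool set cephfs_data_ec_rs4.1 allow_ec_overwrites true`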

This is from memory; I should do a better writeup elsewhere, but what I did was this:

1. Create your new pool: `ceph osd pool create cephfs_data_ec_rs4.1 8 8 erasure rs4.1` (see the command sketch after these steps for creating the profile itself)
2. Set the xattr for the root directory to use the new pool (note it's ceph.dir.layout for a directory, not ceph.file.layout): `setfattr -n ceph.dir.layout.pool -v cephfs_data_ec_rs4.1 /cephfs_mountpoint/`
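
For completeness, a sketch of the surrounding commands (my addition, not from the original thread; it assumes the rs4.1 profile name means Reed-Solomon with k=4, m=1, which you should adjust to taste):

`ceph osd erasure-code-profile set rs4.1 k=4 m=1`

And after step 2, you can verify the layout took effect:

`getfattr -n ceph.dir.layout.pool /cephfs_mountpoint/`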

At this stage all new files will be written to the new pool. Unfortunately you can't change the layout of a file that already has data, so each file has to be copied back into its own place. You can hack up a bash script to do this, or write a converter program. Here's the most relevant bit, per file (the helper functions it calls are elided), which copies the file into the new pool first and then renames the copy over the original:

func doConvert(filename string) error {
        // Create two temp files next to the target: one to receive the
        // copy in the new pool, one to park the original while swapping.
        poolRewriteName, previousPoolName, err := newNearbyTempFiles(filename)
        if err != nil {
                return err
        }

        // Set the layout while the temp file is still empty; a file's
        // layout can't be changed once it contains data.
        err = SetCephFSFileLayoutPool(poolRewriteName, []byte(*toPool))
        if err != nil {
                os.Remove(poolRewriteName)
                os.Remove(previousPoolName)
                return err
        }

        err = CopyFilePermissions(filename, poolRewriteName)
        if err != nil {
                os.Remove(poolRewriteName)
                os.Remove(previousPoolName)
                return err
        }

        // Copy the data into the new-pool file.
        err = CopyFile(filename, poolRewriteName)
        if err != nil {
                os.Remove(poolRewriteName)
                os.Remove(previousPoolName)
                return err
        }

        // Park the original, then rename the new-pool copy into place.
        err = MoveFile(filename, previousPoolName)
        if err != nil {
                os.Remove(poolRewriteName)
                os.Remove(previousPoolName)
                return err
        }

        err = MoveFile(poolRewriteName, filename)
        if err != nil {
                // The original has already been moved aside; restore it
                // rather than deleting the only remaining copy.
                MoveFile(previousPoolName, filename)
                os.Remove(poolRewriteName)
                return err
        }

        os.Remove(previousPoolName)
        return nil
}
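
For context, a minimal driver around doConvert might look like this; it's my sketch rather than the original program, and it assumes doConvert above lives in the same package along with implementations of the elided helpers:

package main

import (
        "flag"
        "log"
        "os"
        "path/filepath"
)

// Destination pool name, referenced as *toPool in doConvert above.
var toPool = flag.String("to-pool", "", "destination data pool")

func main() {
        flag.Parse()
        root := flag.Arg(0) // e.g. /cephfs_mountpoint
        if root == "" || *toPool == "" {
                log.Fatal("usage: converter -to-pool <pool> <mountpoint>")
        }
        // Walk the tree and rewrite every regular file into the new pool.
        // Real code would also skip its own in-flight temp files.
        err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
                if err != nil {
                        return err
                }
                if !info.Mode().IsRegular() {
                        return nil // skip directories, symlinks, etc.
                }
                if err := doConvert(path); err != nil {
                        log.Printf("convert %s: %v", path, err)
                }
                return nil
        })
        if err != nil {
                log.Fatal(err)
        }
}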



On Wed, May 15, 2019 at 10:31 AM Lars Täuber <taeuber@xxxxxxx> wrote:
Hi,

is there a way to migrate a cephfs to a new data pool, like there is for rbd on nautilus?
https://ceph.com/geen-categorie/ceph-pool-migration/

Thanks
Lars
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
