Re: Migrating a cephfs data pool

1. Change the data pool for a folder on the file system (see the sketch 
after this list):

   setfattr -n ceph.dir.layout.pool -v fs_data.ec21 foldername

2. Copy the data into that folder:

   cp -a /oldlocation /foldername

Remember that you would preferably use mv, but mv leaves the (meta)data 
on the old pool, which is not what you want when you intend to delete 
that pool.

3. When everything has been copied over and removed from the old 
location, you should end up with an empty data pool with zero objects.

4. Verify here on the list with others whether you can then just remove 
this pool.
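
A rough end-to-end sketch of the steps above (pool, fs and folder names 
are only examples; I am also assuming the EC pool already exists and has 
allow_ec_overwrites enabled, which cephfs requires for EC data pools):

   # make the new pool available to the file system
   ceph fs add_data_pool my_ceph_fs fs_data.ec21
   # point a folder at the new pool and verify the layout took
   setfattr -n ceph.dir.layout.pool -v fs_data.ec21 /mnt/cephfs/newfolder
   getfattr -n ceph.dir.layout.pool /mnt/cephfs/newfolder
   # copy (new files inherit the folder layout), then remove the originals
   cp -a /mnt/cephfs/oldfolder/. /mnt/cephfs/newfolder/
   rm -rf /mnt/cephfs/oldfolder
   # the old data pool should end up with zero objects
   ceph df | grep cephfs_data
   rados -p cephfs_data ls | head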

I think this is a reliable technique for switching, because it only uses 
basic cephfs functionality that is supposed to work. I would prefer that 
the ceph developers implement a mv that does what you expect it to do; 
right now it acts more or less like linking.
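
If you want to see for yourself where a file's data really lives (a 
sketch, paths are examples): the file layout shows the pool it writes 
to, and cephfs names its data objects <inode-in-hex>.<block>, so you can 
grep the old pool for leftovers:

   getfattr -n ceph.file.layout.pool /mnt/cephfs/newfolder/somefile
   ino=$(printf '%x' $(stat -c %i /mnt/cephfs/newfolder/somefile))
   # any hits here mean this file's data is still in the old pool
   rados -p cephfs_data ls | grep "^$ino\."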




-----Original Message-----
From: Jorge Garcia [mailto:jgarcia@xxxxxxxxxxxx] 
Sent: Friday, June 28, 2019 17:52
To: Marc Roos; ceph-users
Subject: Re:  Migrating a cephfs data pool

Are you talking about adding the new data pool to the current 
filesystem? Like:

   $ ceph fs add_data_pool my_ceph_fs new_ec_pool

I have done that, and now the filesystem shows up as having two data 
pools:

   $ ceph fs ls
   name: my_ceph_fs, metadata pool: cephfs_meta, data pools: [cephfs_data new_ec_pool ]

but then I run into two issues:

1. How do I actually copy/move/migrate the data from the old pool to the 
new pool?
2. When I'm done moving the data, how do I get rid of the old data pool? 

I know there's a rm_data_pool option, but I have read on the mailing 
list that you can't remove the original data pool from a cephfs 
filesystem.

The other option is to create a whole new cephfs with a new metadata 
pool and the new data pool, but creating multiple filesystems is still 
experimental and not allowed by default...
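
(For what it's worth, I believe multiple filesystems can be enabled 
explicitly, at your own risk, with:

   ceph fs flag set enable_multiple true --yes-i-really-mean-it

but the feature is still marked experimental.)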

On 6/28/19 8:28 AM, Marc Roos wrote:
>   
> What about adding the new data pool, mounting it and then moving the 
> files? (read: copying, because a move between data pools does not do 
> what you expect it to)
>
>
> -----Original Message-----
> From: Jorge Garcia [mailto:jgarcia@xxxxxxxxxxxx]
> Sent: Friday, June 28, 2019 17:26
> To: ceph-users
> Subject: Migrating a cephfs data pool
>
> This seems to be an issue that gets brought up repeatedly, but I 
> haven't seen a definitive answer yet. So, at the risk of repeating a 
> question that has already been asked:
>
> How do you migrate a cephfs data pool to a new data pool? The obvious 
> case would be somebody that has set up a replicated pool for their 
> cephfs data and then wants to convert it to an erasure code pool. Is 
> there a simple way to do this, other than creating a whole new ceph 
> cluster and copying the data using rsync?
>
> Thanks for any clues
>
> Jorge
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



