Re: Migrating a cephfs data pool

AFAIK the mv is fast now because it does not move any real data, just 
some metadata. A real mv would be slow (only in the case between 
different pools) because it would copy the data to the new pool and, 
when successful, delete it from the old one. That would of course take 
a lot more time, but you would at least be able to access the cephfs 
in both locations during this time and fix things in your client access.

My problem with the current mv is that if you accidentally use it 
between data pools, it does not really move the data. 
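
For example (path and pool names here are just made up), you can see 
that such an mv only touched metadata by checking where the file's 
objects actually live afterwards:

   # the file keeps the layout it was created with, so this likely
   # still reports the old pool
   $ getfattr -n ceph.file.layout.pool /mnt/cephfs/newdir/somefile

   # cephfs data objects are named "<inode in hex>.<stripe index>",
   # so you can also ask rados directly
   $ ino=$(printf '%x' "$(stat -c %i /mnt/cephfs/newdir/somefile)")
   $ rados -p cephfs_data stat "${ino}.00000000"    # still present in the old pool
   $ rados -p fs_data.ec21 stat "${ino}.00000000"   # nothing was copied here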



-----Original Message-----
From: Robert LeBlanc [mailto:robert@xxxxxxxxxxxxx] 
Sent: Friday, June 28, 2019 18:30
To: Marc Roos
Cc: ceph-users; jgarcia
Subject: Re:  Migrating a cephfs data pool

Given that the MDS knows everything, it seems trivial to add a ceph 'mv' 
command to do this. I looked at using tiering to try and do the move, 
but I don't know how to tell cephfs that the data is now in the new pool 
instead of the old one. Since we can't take a long enough downtime to 
move hundreds of terabytes, we need something that can be done online; 
a minute or two of downtime would be okay.

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Jun 28, 2019 at 9:02 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> 
wrote:


	 
	
	1.
	Change the data pool for a folder on the file system:
	setfattr -n ceph.dir.layout.pool -v fs_data.ec21 foldername
	
	2.
	cp /oldlocation /foldername
	Remember that you would preferably use mv, but that leaves the 
	(meta)data on the old pool, which is not what you want when you 
	want to delete that pool.
	
	3. When everything is copied and removed, you should end up with an 
	empty data pool with zero objects. 
	
	4. Verify here with others whether you can then just remove that pool.
	
	I think this is a reliable technique for switching, because it uses 
	basic cephfs functionality that is supposed to work. I would prefer 
	that the ceph guys implement a mv that does what you expect from it; 
	right now it acts more or less like linking.
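	
	A rough end-to-end sketch of the steps above (pool names and paths 
	are just examples, adjust to your setup):
	
	   # 1. point the folder's layout at the new pool (newly written files go there)
	   $ setfattr -n ceph.dir.layout.pool -v fs_data.ec21 /mnt/cephfs/foldername
	
	   # 2. copy the data so it gets rewritten into the new pool, then remove the originals
	   $ cp -a /mnt/cephfs/oldlocation/. /mnt/cephfs/foldername/
	   $ rm -rf /mnt/cephfs/oldlocation
	
	   # 3. the old data pool should now report (close to) zero objects
	   $ rados df | grep cephfs_data
	
	   # 4. only after verifying, consider removing it from the fs
	   #    (note that the original/default data pool of a cephfs cannot be removed)
	   $ ceph fs rm_data_pool my_ceph_fs cephfs_data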
	
	
	
	
	-----Original Message-----
	From: Jorge Garcia [mailto:jgarcia@xxxxxxxxxxxx] 
	Sent: Friday, June 28, 2019 17:52
	To: Marc Roos; ceph-users
	Subject: Re:  Migrating a cephfs data pool
	
	Are you talking about adding the new data pool to the current 
	filesystem? Like:
	
	   $ ceph fs add_data_pool my_ceph_fs new_ec_pool
	
	I have done that, and now the filesystem shows up as having two data 
	pools:
	
	   $ ceph fs ls
	   name: my_ceph_fs, metadata pool: cephfs_meta, data pools: [cephfs_data new_ec_pool ]
	
	but then I run into two issues:
	
	1. How do I actually copy/move/migrate the data from the old pool to 
	the new pool?
	2. When I'm done moving the data, how do I get rid of the old data 
	pool? 
	
	I know there's a rm_data_pool option, but I have read on the mailing 
	list that you can't remove the original data pool from a cephfs 
	filesystem.
	
	The other option is to create a whole new cephfs with a new metadata 
	pool and the new data pool, but creating multiple filesystems is 
	still experimental and not allowed by default...
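	
	For what it's worth, a sketch of that route (names are made up; 
	multiple filesystems have to be enabled explicitly first):
	
	   $ ceph fs flag set enable_multiple true --yes-i-really-mean-it
	   # the default data pool of a new fs should be a replicated pool;
	   # the EC pool can then be added as an extra data pool
	   $ ceph fs new my_new_fs cephfs_meta_new cephfs_data_new
	   $ ceph fs add_data_pool my_new_fs new_ec_pool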
	
	On 6/28/19 8:28 AM, Marc Roos wrote:
	>   
	> What about adding the new data pool, mounting it, and then moving 
	> the files? (Read: copying, because a move between data pools does 
	> not do what you expect it to do.)
	>
	>
	> -----Original Message-----
	> From: Jorge Garcia [mailto:jgarcia@xxxxxxxxxxxx]
	> Sent: Friday, June 28, 2019 17:26
	> To: ceph-users
	> Subject: *****SPAM*****  Migrating a cephfs data pool
	>
	> This seems to be an issue that gets brought up repeatedly, but I 
	> haven't seen a definitive answer yet. So, at the risk of repeating 
	> a question that has already been asked:
	>
	> How do you migrate a cephfs data pool to a new data pool? The 
	> obvious case would be somebody that has set up a replicated pool 
	> for their cephfs data and then wants to convert it to an erasure 
	> code pool. Is there a simple way to do this, other than creating 
	> a whole new ceph cluster and copying the data using rsync?
	>
	> Thanks for any clues
	>
	> Jorge
	>
	
	
	


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


