Re: Downsizing a cephfs pool

I think I would COPY and DELETE the data in chunks, not via the 'backend' 
but just via cephfs, so you are 100% sure nothing weird can happen. 
(MOVE does not work the way you might expect on a cephfs between different pools.)
You can create and mount an extra data pool in cephfs. I have done this 
as well, so you can mix rep3, erasure coded and a fast ssd pool on your cephfs. 

Adding a pool, something like this:
ceph osd pool set fs_data.ec21 allow_ec_overwrites true
ceph osd pool application enable fs_data.ec21 cephfs
ceph fs add_data_pool cephfs fs_data.ec21
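
This assumes the fs_data.ec21 pool already exists; if not, create it 
first, something like the following. The ec profile with k=2 m=1 is only 
an example, adjust it to your OSD count:

ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
ceph osd pool create fs_data.ec21 8 8 erasure ec21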

Change a directory to use a different pool:
setfattr -n ceph.dir.layout.pool -v fs_data.ec21 folder
getfattr -n ceph.dir.layout.pool folder
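
The copy/delete itself is then just normal file operations on the 
mounted cephfs, per chunk something like this (the paths here are only 
an example):

rsync -a /mnt/cephfs/olddir/chunk1/ /mnt/cephfs/folder/chunk1/
getfattr -n ceph.file.layout.pool /mnt/cephfs/folder/chunk1/somefile
rm -rf /mnt/cephfs/olddir/chunk1

The getfattr checks that a copied file really landed in the new pool 
before you delete the old copy.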


-----Original Message-----
From: Brian Topping [mailto:brian.topping@xxxxxxxxx] 
Sent: 08 February 2019 10:02
To: Marc Roos
Cc: ceph-users
Subject: Re:  Downsizing a cephfs pool

Hi Marc, that's great advice, thanks! I'm always grateful for the 
knowledge. 

What about the issue with the pools containing a CephFS though? Is it 
something where I can just turn off the MDS, copy the pools and rename 
them back to the original name, then restart the MDS? 

Agreed about using smaller numbers. When I went to using seven disks, I 
was getting warnings about too few PGs per OSD. I'm sure this is 
something one learns to cope with via experience and I'm still picking 
that up. I had hoped not to get into a bind like this so quickly, but hey, 
here I am again :)

> On Feb 8, 2019, at 01:53, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
> 
> 
> There is a setting for the maximum number of PGs per OSD. I would set 
> that temporarily so you can work, create a new pool with 8 PGs and move 
> data over to the new pool, remove the old pool, then unset this max 
> PGs per OSD limit.
> 
> PS. I always create pools starting at 8 PGs, and when I know I am at 
> what I want in production I can always increase the PG count.
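
The setting referred to above is mon_max_pg_per_osd; roughly something 
like this (400 is only an example value, and on newer releases you can 
also use 'ceph config set global mon_max_pg_per_osd 400'):

ceph tell mon.* injectargs '--mon_max_pg_per_osd 400'
# create the new pool, move the data over, remove the old pool,
# then set the option back to your previous value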
> 
> 
> 
> -----Original Message-----
> From: Brian Topping [mailto:brian.topping@xxxxxxxxx]
> Sent: 08 February 2019 05:30
> To: Ceph Users
> Subject:  Downsizing a cephfs pool
> 
> Hi all, I created a problem when moving data to Ceph and I would be 
> grateful for some guidance before I do something dumb.
> 
> 
> 1.    I started with the 4x 6TB source disks that came together as a 
> single XFS filesystem via software RAID. The goal is to have the same 
> data on a cephfs volume, but with these four disks formatted for 
> bluestore under Ceph.
> 2.    The only spare disks I had were 2TB, so I put 7x together. I sized 
> data and metadata for cephfs at 256 PGs, but it was wrong.
> 3.    The copy went smoothly, so I zapped and added the original 4x 6TB 
> disks to the cluster.
> 4.    I realized what I had done: when the 7x 2TB disks were removed, 
> there were going to be far too many PGs per OSD.
> 
> 
> I just read over https://stackoverflow.com/a/39637015/478209, but that 
> addresses how to do this with a generic pool, not pools used by CephFS. 
> It looks easy to copy the pools, but once copied and renamed, CephFS 
> may not recognize them as the target and the data may be lost.
> 
> Do I need to create new pools and copy again using cpio? Is there a 
> better way?
> 
> Thanks! Brian
> 
> 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


