Re: Downsizing a cephfs pool

Hi Marc, that’s great advice, thanks! I’m always grateful for the knowledge.

What about the issue with the pools containing a CephFS, though? Is it something where I can just stop the MDS, copy the pools, rename the copies back to the original names, and then restart the MDS?
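To make that concrete, here is roughly the sequence I have in mind, assuming a Mimic-or-later cluster (the fs and pool names below are just placeholders from my setup, I'm not certain rados cppool preserves everything CephFS needs, especially omap data in the metadata pool, and my understanding is that CephFS tracks its pools by ID rather than name, which is exactly the part I'm unsure about):

    # Take the filesystem down so the MDS stops and no clients can write
    ceph fs set cephfs down true

    # Create smaller replacement pools (8 PGs each) and copy the data over
    ceph osd pool create cephfs_data_new 8 8
    ceph osd pool create cephfs_metadata_new 8 8
    rados cppool cephfs_data cephfs_data_new
    rados cppool cephfs_metadata cephfs_metadata_new

    # Swap the names so the originals are kept around as *_old for now
    ceph osd pool rename cephfs_data cephfs_data_old
    ceph osd pool rename cephfs_data_new cephfs_data
    ceph osd pool rename cephfs_metadata cephfs_metadata_old
    ceph osd pool rename cephfs_metadata_new cephfs_metadata

    # Bring the filesystem back up and verify before deleting the *_old pools
    ceph fs set cephfs down false

Does that match what you had in mind, or is there a step where CephFS will refuse to pick up the renamed pools?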

Agreed about using smaller numbers. When I went to seven disks, I was getting warnings about too few PGs per OSD. I’m sure this is something one learns to cope with via experience, and I’m still picking that up. I had hoped not to get in a bind like this so quickly, but hey, here I am again :)
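For the limit you mention, I assume you mean mon_max_pg_per_osd. This is a rough sketch of what I'd plan to run, assuming Mimic or later for ceph config set (on Luminous I believe it would be injectargs instead), and 400 is just an arbitrary temporary ceiling, not a recommendation:

    # See how many PGs each OSD currently holds (PGS column)
    ceph osd df

    # Temporarily raise the per-OSD PG ceiling while both sets of pools exist
    # (the default is around 200-250 depending on release)
    ceph config set global mon_max_pg_per_osd 400

    # ... copy the pools, then delete the oversized originals ...

    # Drop back to the default afterwards
    ceph config rm global mon_max_pg_per_osd

Please correct me if that's not the setting you meant.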

> On Feb 8, 2019, at 01:53, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
> 
> 
> There is a setting for the max PGs per OSD. I would raise that 
> temporarily so you can work, create a new pool with 8 PGs and move the 
> data over to the new pool, remove the old pool, and then unset the max 
> PGs per OSD again.
> 
> PS. I always create pools starting with 8 PGs, and once I know the pool 
> is what I want in production I can always increase the PG count.
> 
> 
> 
> -----Original Message-----
> From: Brian Topping [mailto:brian.topping@xxxxxxxxx] 
> Sent: 08 February 2019 05:30
> To: Ceph Users
> Subject:  Downsizing a cephfs pool
> 
> Hi all, I created a problem when moving data to Ceph and I would be 
> grateful for some guidance before I do something dumb.
> 
> 
> 1.    I started with the 4x 6TB source disks that came together as a 
> single XFS filesystem via software RAID. The goal is to have the same 
> data on a CephFS volume, but with these four disks formatted for 
> BlueStore under Ceph.
> 2.    The only spare disks I had were 2TB, so I put 7x of them together. 
> I sized the data and metadata pools for CephFS at 256 PGs each, but that 
> was wrong.
> 3.    The copy went smoothly, so I zapped and added the original 4x 6TB 
> disks to the cluster.
> 4.    Then I realized my mistake: once the 7x 2TB disks are removed, 
> there will be far too many PGs per OSD.
> 
> 
> I just read over https://stackoverflow.com/a/39637015/478209, but that 
> addresses how to do this with a generic pool, not pools used by CephFS. 
> It looks easy to copy the pools, but once they are copied and renamed, 
> CephFS may not recognize them as its data and metadata pools, and the 
> data may be lost.
> 
> Do I need to create new pools and copy again using cpio? Is there a 
> better way?
> 
> Thanks! Brian
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



