Downsizing a cephfs pool

Hi all, I created a problem when moving data to Ceph and I would be grateful for some guidance before I do something dumb.

  1. I started with 4x 6TB source disks combined into a single XFS filesystem via software RAID. The goal is to have the same data on a CephFS volume, with those four disks reformatted as Bluestore OSDs under Ceph.
  2. The only spare disks I had were 2TB, so I put 7x of them together. I sized the CephFS data and metadata pools at 256 PGs, which turned out to be wrong.
  3. The copy went smoothly, so I zapped the original 4x 6TB disks and added them to the cluster.
  4. Then I realized my mistake: once the 7x 2TB disks are removed, there will be far too many PGs per OSD (see the rough arithmetic after this list).
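For context, here is the back-of-the-envelope arithmetic behind point 4. It is only a sketch: the 3x replication factor and the reading of "256 PGs for each of the two pools" are my assumptions, not facts stated above.

    # Rough PGs-per-OSD estimate (assumptions: size=3 replication,
    # 256 PGs each for the CephFS data and metadata pools).
    pools = {"cephfs_data": 256, "cephfs_metadata": 256}
    replica_size = 3

    def pgs_per_osd(num_osds: int) -> float:
        """Total PG replicas spread across the given number of OSDs."""
        return sum(pools.values()) * replica_size / num_osds

    print(pgs_per_osd(11))  # 7x2TB + 4x6TB OSDs in the cluster: ~140 PGs per OSD
    print(pgs_per_osd(4))   # after removing the 7x2TB disks: 384, well above the ~100 guideline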

I just read over https://stackoverflow.com/a/39637015/478209, but that addresses how to do this for a generic pool, not for pools used by CephFS. Copying the pools looks easy, but once they are copied and renamed, CephFS may not recognize them as its data and metadata pools and the data could become inaccessible.
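For reference, the pool-copy approach boils down to writing every object from the old pool into a new, smaller one. Below is a minimal, hypothetical sketch of that per-object copy using the python-rados bindings; the pool names are placeholders, and the loop ignores xattrs, omap data, namespaces and snapshots, which is part of why a naive copy is not a safe substitute for a filesystem-level copy of CephFS pools.

    # Hypothetical per-object pool copy sketch (placeholder pool names).
    # Skips xattrs/omap/namespaces/snapshots; do not treat as a complete migration.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        src = cluster.open_ioctx("cephfs_data")        # old 256-PG pool (placeholder)
        dst = cluster.open_ioctx("cephfs_data_small")  # new, smaller pool (placeholder)
        for obj in src.list_objects():
            size, _mtime = src.stat(obj.key)           # object size, so we read it whole
            data = src.read(obj.key, length=size) if size else b""
            dst.write_full(obj.key, data)
        src.close()
        dst.close()
    finally:
        cluster.shutdown()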

Do I need to create new pools and copy again using cpio? Is there a better way?

Thanks! Brian
