Re: Reducing ceph cluster size in half

This might be easiest to think about in two steps: draining hosts, and doing a PG merge. You can do them in either order (though, thinking about it, doing the merge first means you still have the full cluster's resources available, so it should go faster).
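
As a rough sketch of the merge step (the pool name and target pg_num below are placeholders -- pick values that fit the post-shrink OSD count, and turn the autoscaler off for the pool if it's on so it doesn't fight you):

  ceph osd pool get <pool> pg_num                  # check the current value
  ceph osd pool set <pool> pg_autoscale_mode off   # optional: keep the autoscaler out of the way
  ceph osd pool set <pool> pg_num <new_pg_num>     # e.g. half the current value

On Pacific the mons/mgr step pg_num (and pgp_num) down toward the target gradually, so the merge trickles along rather than happening all at once.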

Draining the hosts can be done in a few ways, too. If you want to do it in one shot, you can set nobackfill, set the CRUSH weights/reweights for the departing OSDs to zero, let the peering storm settle, and then unset nobackfill. This is probably the easiest option if a brief peering storm and a pile of backfill_wait PGs aren't a concern.
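
Something along these lines, assuming you know the OSD IDs on the departing hosts (the IDs below are made up):

  ceph osd set nobackfill
  for id in 80 81 82 83; do             # OSDs on the hosts being drained
      ceph osd crush reweight osd.$id 0
  done
  # wait for peering to settle, then let backfill go:
  ceph osd unset nobackfill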

If you want to limit the number of backfill_wait PGs, you can use something like `pgremapper drain`, but this will likely involve multiple data movements: the initial drain is fine, but removing the hosts from CRUSH will invalidate the upmaps (the resulting backfill can be suppressed with `pgremapper cancel-backfill`). Additional data movement will then be needed if you want to run `pgremapper undo-upmaps` to clean up what was cancelled (or if you use the balancer and it wants to move things).
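
Roughly, that flow looks like the following for one departing host. The host/OSD names are placeholders, the pgremapper flags are from memory (check each subcommand's --help), and I've left out the cephadm-side daemon cleanup:

  # move the host's PGs onto survivors via upmaps
  pgremapper drain osd.80 --target-osd osd.0 --target-osd osd.1
  # once empty, remove the OSDs and host bucket from CRUSH -- this drops the upmaps
  ceph osd purge 80 --yes-i-really-mean-it
  ceph osd crush remove host20
  ceph orch host rm host20
  # suppress the backfill the CRUSH change just scheduled
  pgremapper cancel-backfill --yes
  # later, walk the remaining upmaps back at your own pace
  pgremapper undo-upmaps osd.0 osd.1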


On 2022-02-21 17:58, Jason Borden wrote:
Hi all,

I'm looking for some advice on reducing my ceph cluster in half. I
currently have 40 hosts and 160 osds on a cephadm managed pacific
cluster. The storage space is only 12% utilized. I want to reduce the
cluster to 20 hosts and 80 osds while keeping the cluster operational.
I'd prefer to do this in as few operations as possible instead of
draining one host at a time and having to rebalance pgs 20 times. I
think I should probably halve the number of pgs at the same time too.
Does anyone have any advice on how I can safely achieve this?

Thanks,
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx