Re: Reducing ceph cluster size in half


 



Hi,

There are different ways, but I would (a rough command sketch follows below):

- Change the CRUSH weight (and not the reweight) of the OSDs I want to remove to 0
- Wait for the cluster to become healthy again
- Stop the OSDs I want to remove
- If data are OK, remove the OSDs from the crushmap.
	- There is no reason stopping the OSDs should impact your service, as they hold no data; it’s just a safety check.

- Then decrease the PG count if needed.
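
For example, something like this, as a minimal sketch rather than an exact recipe: osd.12, the pool name "mypool" and the target pg_num are placeholders, the daemon stop command assumes a cephadm-managed cluster, and you would repeat the per-OSD steps for every OSD you remove:

    # CRUSH weight (not the reweight override): drains data off the OSD
    ceph osd crush reweight osd.12 0

    # wait until rebalancing is done and the cluster is HEALTH_OK again
    ceph status

    # stop the daemon (cephadm-managed cluster)
    ceph orch daemon stop osd.12

    # if everything still looks fine, remove it for good
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12

    # optionally reduce PGs afterwards (PG merging works since Nautilus)
    ceph osd pool set mypool pg_num 128

On recent releases "ceph osd purge 12 --yes-i-really-mean-it" combines the last three removal commands into one.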

-
Etienne Menguy
etienne.menguy@xxxxxxxx




> On 21 Feb 2022, at 22:58, Jason Borden <jason.borden@xxxxxxxxx> wrote:
> 
> Hi all,
> 
> I'm looking for some advice on reducing my ceph cluster by half. I currently have 40 hosts and 160 osds on a cephadm managed pacific cluster. The storage space is only 12% utilized. I want to reduce the cluster to 20 hosts and 80 osds while keeping the cluster operational. I'd prefer to do this in as few operations as possible instead of draining one host at a time and having to rebalance pgs 20 times. I think I should probably halve the number of pgs at the same time too. Does anyone have any advice on how I can safely achieve this?
> 
> Thanks,
> Jason




