Re: Shrinking lab cluster to free hardware for a new deployment

AFAIK, depending on how many PGs you have, you are likely to end up with a 'too many PGs per OSD' warning for your main pool if you do this, because the number of PGs in a pool cannot be reduced and there will be fewer OSDs to place them on.
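
As a rough illustration (assuming, say, a pool with 512 PGs, size 2, and a default warning threshold of roughly 300 PGs per OSD in releases of that era): 512 * 2 / 6 is about 171 PGs per OSD today, but 512 * 2 / 2 = 512 per OSD after shrinking, which would trip the warning. The current numbers can be checked with:

    ceph osd df                        # the PGS column shows PGs per OSD
    ceph osd pool get <pool> pg_num    # PG count of a given pool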

-Ben

On Wed, Mar 8, 2017 at 5:53 AM, Henrik Korkuc <lists@xxxxxxxxx> wrote:
On 17-03-08 15:39, Kevin Olbrich wrote:
Hi!

Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each).
We want to shut down the cluster, but it holds some semi-productive VMs that we may or may not need in the future.
To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we use size 2 and min_size 1).
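
As a rough sanity check that everything will actually fit on two OSDs with size 2, the usual capacity and PG views can be consulted first (standard commands, shown here only as a sketch):

    ceph df        # total and per-pool usage
    ceph osd df    # per-OSD utilization and PG counts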

Should I set the OSDs out one by one, or all at once with the nobackfill and norecover flags set?
If the latter is the case, which other flags should also be set?

Just set the OSDs out and wait for the cluster to rebalance; the OSDs will stay active and serve traffic while data is moved off them. I had a case where some PGs wouldn't move off, so after everything settles you may need to remove the OSDs from the CRUSH map one by one.
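
For reference, a minimal sketch of that sequence (assuming a systemd-based install; osd.5 is just a placeholder ID, repeat per OSD and wait for the cluster to settle in between):

    ceph osd out 5                 # mark the OSD out; data starts migrating off it
    ceph -s                        # watch until recovery/backfill has finished
    systemctl stop ceph-osd@5      # stop the daemon once it no longer holds data
    ceph osd crush remove osd.5    # remove it from the CRUSH map
    ceph auth del osd.5            # remove its auth key
    ceph osd rm 5                  # remove it from the OSD map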

Thanks!

Kind regards,
Kevin Olbrich.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
