Shrinking lab cluster to free hardware for a new deployment

On 17-03-08 15:39, Kevin Olbrich wrote:
> Hi!
>
> Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each).
> We want to shut down the cluster but it holds some semi-productive VMs 
> we might or might not need in the future.
> To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we 
> use size 2 and min_size 1).
>
> Should I set the OSDs out one by one, or all at once with the nobackfill
> and norecover flags set?
> If the latter, which other flags should also be set?
>
Just set the OSDs out and wait for them to rebalance; the OSDs will stay 
active and serve traffic while the data moves off them. I had a case where 
some PGs wouldn't move off, so after everything settles you may need to 
remove the OSDs from CRUSH one by one.
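
For reference, a minimal sketch of that sequence. The OSD IDs 2-5 below are 
placeholders for whichever OSDs you retire; substitute your own, and make 
sure the two OSDs you keep have enough capacity to hold all the data at 
size 2 before you start:

    # Mark the OSDs to retire as out; data starts migrating immediately
    # while the OSDs keep serving reads and writes.
    ceph osd out 2
    ceph osd out 3
    ceph osd out 4
    ceph osd out 5

    # Watch progress and wait until all PGs are active+clean again.
    ceph -s
    ceph osd df

    # Once the cluster is healthy, remove each drained OSD for good
    # (run the systemctl command on that OSD's own host).
    systemctl stop ceph-osd@2
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm 2

Removing the OSD from the CRUSH map is what forces any stubborn PGs to 
remap, so repeat the last block for each retired OSD and let the cluster 
settle in between.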

> Thanks!
>
> Kind regards,
> Kevin Olbrich.
>



