Adding multiple OSDs to an active cluster

As described recently in several other threads, we like to add OSDs into
their proper CRUSH location, but with the following parameter set:

  osd crush initial weight = 0

We then bring the OSDs into the cluster (zero impact in our environment) and
then gradually increase their CRUSH weight, all at the same time, until they
reach their final desired value.
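For reference, that parameter goes in the `[osd]` section of ceph.conf; this
fragment simply restates the setting from the text:

```ini
[osd]
# New OSDs join the CRUSH map with weight 0, so no data migrates to them
# until we explicitly reweight.
osd crush initial weight = 0
```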

The script I use basically checks, in each iteration, for all OSDs below our
target weight, moves each one closer to the target by a defined increment,
and then waits for HEALTH_OK or another acceptable state.

I would suggest starting with increments of 0.001 for large groups of OSDs.
We can comfortably bring in 100 OSDs with increments of about 0.004 at a
time. Theoretically we could just let them all weight in at once, but this
approach lets us find a comfortable rate and pause the process
whenever/wherever we want if it does cause issues.
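The loop described above might be sketched roughly like this. This is not the
author's actual script; the OSD ids, target weight, step size, and sleep
interval are all hypothetical placeholders to adjust for your own cluster:

```shell
#!/bin/sh
# Sketch of the gradual reweight loop described above -- hypothetical
# values, not the author's script.
OSDS="100 101 102"   # ids of the newly added OSDs (hypothetical)
TARGET="1.819"       # final desired CRUSH weight (hypothetical)
STEP="0.004"         # per-iteration increment

# next_weight CURRENT -> prints CURRENT + STEP, capped at TARGET
next_weight() {
    awk -v c="$1" -v s="$STEP" -v t="$TARGET" \
        'BEGIN { n = c + s; if (n > t) n = t; printf "%.3f", n }'
}

# Only run the loop where the ceph CLI is actually present.
if command -v ceph >/dev/null 2>&1; then
    w="0.000"
    while [ "$w" != "$TARGET" ]; do
        w=$(next_weight "$w")
        for id in $OSDS; do
            ceph osd crush reweight "osd.$id" "$w"
        done
        # Wait for the cluster to settle before the next increment.
        until ceph health | grep -q HEALTH_OK; do
            sleep 30
        done
    done
fi
```

Because each pass waits for the cluster to return to an acceptable state, you
can stop the script between iterations at any point if client impact appears.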

Hope that helps.

On Fri, Feb 17, 2017 at 1:42 AM, nigel davies <nigdav007 at gmail.com> wrote:

> Hey all,
>
> What is the best way to add multiple OSDs to an active cluster?
>
> The last time I did this I almost killed the VMs we had running on
> the cluster.
>
> Thanks
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Brian Andrus | Cloud Systems Engineer | DreamHost
brian.andrus at DreamHost.com | www.dreamhost.com

