Re: rebooting nodes in a ceph cluster

So is it recommended to adjust the rebalance timeout to align with the time it takes to reboot individual nodes?

I didn't see this in my pass through the ops manual, but maybe I'm not looking in the right place.

Thanks,

~jpr
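
For reference, here is how the setting Sage mentions below might be applied. This is a sketch based on the standard ceph CLI and ceph.conf conventions, not something spelled out in this thread; the `noout` flag shown as an alternative is the usual way to suppress rebalancing entirely during a planned reboot:

```shell
# ceph.conf fragment: wait 15 minutes (900 s) before marking a
# "down" OSD "out" and starting rebalancing:
#
#   [global]
#   mon osd down out interval = 900

# Alternatively, for planned maintenance, temporarily prevent any
# OSD from being marked out at all:
ceph osd set noout      # run before rebooting the node
# ... reboot the node, wait for its OSDs to rejoin ...
ceph osd unset noout    # restore normal marking behavior
```

With `noout` set the cluster will report HEALTH_WARN while the node is down, but no data migration is triggered regardless of how long the reboot takes.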

> On Dec 19, 2013, at 6:51 PM, "Sage Weil" <sage@xxxxxxxxxxx> wrote:
> 
>> On Thu, 19 Dec 2013, John-Paul Robinson wrote:
>> What impact does rebooting nodes in a ceph cluster have on the health of
>> the ceph cluster?  Can it trigger rebalancing activities that then have
>> to be undone once the node comes back up?
>> 
>> I have a 4 node ceph cluster each node has 11 osds.  There is a single
>> pool with redundant storage.
>> 
>> If it takes 15 minutes for one of my servers to reboot is there a risk
>> that some sort of needless automatic processing will begin?
> 
> By default, we start rebalancing data after 5 minutes.  You can adjust 
> this (to, say, 15 minutes) with
> 
> mon osd down out interval = 900
> 
> in ceph.conf.
> 
> sage
> 
>> 
>> I'm assuming that the ceph cluster can go into a "not ok" state, but that
>> in this particular configuration all the data is protected against a
>> single node failure and there is no place for the data to migrate to, so
>> nothing "bad" will happen.
>> 
>> Thanks for any feedback.
>> 
>> ~jpr
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 
>> 