Re: One host with 24 OSDs is offline - best way to get it back online

I went through this last weekend while reformatting all the OSDs on a much smaller cluster. When I turned nodes back on, PGs would sometimes move, only to move back, prolonging the operation and the stress on the system.

What I took away is that the least overall system stress comes from getting the OSD tree back to its target state as quickly as is safe and practical. Recovery will proceed at whatever pace it proceeds, but if the strategy changes midway, the same amount of data movement just gets spread over a longer time.
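
If it helps, this is roughly how I kept an eye on things while bringing nodes back. These are plain status commands; the systemctl line assumes a systemd-based install, so adjust to your setup:

    ceph -s                # overall health plus recovery/remap progress
    ceph health detail     # which PGs are degraded/remapped and why
    ceph osd tree          # confirm the host's OSDs show up/in again

    # on the rebooted node, if the OSD daemons did not start on boot:
    systemctl start ceph-osd.target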

> On Jan 26, 2019, at 15:41, Chris <bitskrieg@xxxxxxxxxxxxx> wrote:
> 
> It sort of depends on your workload/use case. Recovery operations can be computationally expensive. If your load is light because it's the weekend, you should be able to turn that host back on as soon as you resolve whatever the issue is, with minimal impact. You can also increase the priority of the recovery operation to make it go faster if you feel you can spare the additional IO and it won't affect clients.
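> For example, something along these lines (a rough sketch - exact option names, defaults, and sane values depend on your release and hardware, so treat it as a starting point rather than a recipe):
> 
>     # temporarily allow more concurrent backfill/recovery work per OSD
>     ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 4'
> 
>     # dial it back down once the cluster is healthy again (check your own defaults first)
>     ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 3'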
> 
> We do this in our cluster regularly and have yet to see an issue (given that we take care to do it during periods of lower client IO).
> 
>> On January 26, 2019 17:16:38 Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
>> 
>> Hi,
>> 
>> One host out of 10 is down for as-yet-unknown reasons. I suspect a power failure, but I have not yet been able to look at the server.
>> 
>> The cluster is recovering and remapping fine, but it still has some objects to process.
>> 
>> My question: can I just switch the server back on so that, in the best case, the 24 OSDs come back online and recovery does the job without problems?
>> 
>> Or what would be a good way to handle that host? Should I wait until the recovery is finished first?
>> 
>> Thanks for feedback and suggestions - happy Saturday night :) Regards, Götz
>> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



