Re: Physical maintenance

If you stop the OSDs cleanly, that should cause no disruption to clients.
Starting the OSDs back up is another story: expect slow requests for a while, and unless you have lots of very fast CPUs on the OSD node, start them one by one rather than all at once.
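
A minimal sketch of that sequence, assuming systemd-managed OSDs (the OSD IDs and unit names below are just examples; older releases use upstart or sysvinit scripts instead):

    ceph osd set noout              # keep the cluster from marking the stopped OSDs out
    systemctl stop ceph-osd@3       # stop each OSD on the node cleanly
    systemctl stop ceph-osd@7
    # ... perform the maintenance ...
    systemctl start ceph-osd@3      # bring them back one by one
    ceph -s                         # wait until the PGs are active+X again
    systemctl start ceph-osd@7
    ceph osd unset noout            # only once all OSDs are back up and in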


Jan


> On 13 Jul 2016, at 14:37, Wido den Hollander <wido@xxxxxxxx> wrote:
> 
> 
>> On 13 July 2016 at 14:31, Kees Meijs <kees@xxxxxxxx> wrote:
>> 
>> 
>> Hi Cephers,
>> 
>> There's some physical maintenance I need to perform on an OSD node.
>> Very likely the maintenance is going to take a while since it involves
>> replacing components, so I would like to be well prepared.
>> 
>> Unfortunately it is not an option to add another OSD node or rebalance at
>> this time, so I'm planning to operate in a degraded state during the
>> maintenance.
>> 
>> If at all possible, I would like to shut down the OSD node cleanly and
>> prevent slow (or even blocking) requests on Ceph clients.
>> 
>> Just setting the noout flag and shutting down the OSDs on the given node
>> is not enough, it seems. In fact, clients do not behave that well in this
>> case: connections time out and I/O seems to stall for a while.
>> 
> 
> noout doesn't do anything for the clients; it just tells the cluster not to mark any OSD as out after it goes down.
> 
> If you want to do this slowly, take the OSDs down one by one and wait for the PGs to become active+X again.
> 
> When you start them back up, do the same: start them one by one.
> 
> You will always have a short moment where the PGs are inactive.
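> 
> A rough sketch of that per-OSD loop (the OSD IDs are placeholders and the grep is only a crude health check; watching ceph -s by hand works just as well):
> 
>     for id in 3 7 11; do
>         systemctl stop ceph-osd@$id
>         # wait until no PGs are left peering/inactive before touching the next OSD
>         while ceph pg stat | grep -Eq 'peering|inactive|down'; do sleep 5; done
>     done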
> 
>> Any thoughts on this, anyone? For example, is it a sensible idea, and are
>> writes still possible? Let's assume some of the OSDs on the
>> to-be-maintained host are certainly acting as primaries.
>> 
>> Thanks in advance!
>> 
>> Cheers,
>> Kees
>> 
>> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


