Re: Physical maintenance

Looks good.
You can start several OSDs at a time as long as you have enough CPU and you're not saturating your drives or controllers.
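
For example, something along these lines (just a sketch, assuming systemd-managed OSDs; substitute your actual OSD IDs and adjust the pause to taste):

    for id in 12 13 14; do
        systemctl start ceph-osd@$id
        sleep 60        # give peering/recovery time to settle
        ceph -s         # check for slow requests before starting the next one
    done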

Jan

> On 13 Jul 2016, at 15:09, Wido den Hollander <wido@xxxxxxxx> wrote:
> 
> 
>> On 13 July 2016 at 14:47, Kees Meijs <kees@xxxxxxxx> wrote:
>> 
>> 
>> Thanks!
>> 
>> So to sum up, I'd best (rough commands sketched below):
>> 
>>  * set the noout flag
>>  * stop the OSDs one by one
>>  * shut down the physical node
>>  * yank the OSD drives to prevent ceph-disk(8) from automatically
>>    activating them at boot time
>>  * do my maintenance
>>  * start the physical node
>>  * reseat and activate the OSD drives one by one
>>  * unset the noout flag
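>> 
>> In commands, roughly (a sketch on my side, assuming systemd-managed OSDs; repeat the per-OSD steps for each <id>):
>> 
>>   ceph osd set noout              # keep stopped OSDs from being marked out
>>   systemctl stop ceph-osd@<id>    # one OSD at a time
>>   shutdown -h now                 # power off the node
>>   ... maintenance, boot the node, reseat drives ...
>>   systemctl start ceph-osd@<id>   # one OSD at a time
>>   ceph osd unset noout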
>> 
> 
> That should do it indeed. Take your time between the OSDs; that should limit the 'downtime' for clients.
> 
> Wido
> 
>> On 13-07-16 14:39, Jan Schermer wrote:
>>> If you stop the OSDs cleanly then that should cause no disruption to clients.
>>> Starting the OSDs back up is another story: expect slow requests for a while there, and unless you have lots of very fast CPUs on the OSD node, start them one by one rather than all at once.
>> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


