Re: Maintenance mode

Try "ceph osd set noout" beforehand, then "ceph osd unset noout" afterward. That will prevent any OSDs from being marked out of the mapping, so no data will be rebalanced. I don't think there's a way to do this on a per-OSD basis, though.  
This is described briefly in the docs at http://ceph.com/docs/master/rados/operations/troubleshooting-osd/?highlight=noout, though it could probably be a bit clearer.
-Greg
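
For reference, a full maintenance run with the noout flag might look like the sketch below. The "ceph osd set/unset noout" and "ceph -s" commands are the ones described above; the service-control lines are an assumption (init-script names vary by distribution and Ceph version), and osd.1 .. osd.6 are just the OSD IDs from your example:

```shell
# Before maintenance: tell the monitors not to mark any OSD "out",
# so the CRUSH mapping stays fixed and no data is rebalanced.
ceph osd set noout

# Stop the OSD daemons on the rack being serviced (hypothetical
# init-script invocation -- adjust for your distribution/layout).
for id in 1 2 3 4 5 6; do
    service ceph stop osd.$id
done

# The cluster now reports HEALTH_WARN with degraded PGs, but no
# backfill or recovery traffic starts. Verify with:
ceph -s

# After maintenance: restart the daemons, then clear the flag.
for id in 1 2 3 4 5 6; do
    service ceph start osd.$id
done
ceph osd unset noout
```

Once the flag is unset, the cluster only has to replay the log entries the stopped OSDs missed, rather than re-replicating whole PGs.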


On Thursday, January 31, 2013 at 11:40 PM, Alexis GÜNST HORN wrote:

> Hello to all,
>  
> Here is my setup:
>  
> - 2 racks
> - osd1 .. osd6 in rack1
> - osd7 .. osd12 in rack2
> - replica = 2
> - CRUSH map set to put replicas across racks
>  
> My question:
> Let's imagine that one day I need to unplug one of the racks (let's
> say rack1). No problem, because another copy of my objects will be in
> the other rack. But if I do it, Ceph will start to rebalance data
> across OSDs.
>  
> So, is there a way to put nodes in a "maintenance mode", in order to put
> Ceph into "degraded" mode while avoiding any remapping?
>  
> The idea is to have a command like:
>  
> $ ceph osd set maintenance=on osd.1
> $ ceph osd set maintenance=on osd.2
> $ ceph osd set maintenance=on osd.3
> $ ceph osd set maintenance=on osd.4
> $ ceph osd set maintenance=on osd.5
> $ ceph osd set maintenance=on osd.6
>  
> So Ceph knows that 6 OSDs are down and goes into degraded mode, but
> without remapping data.
> Then, once maintenance is finished, I'll only have to do the opposite:
>  
> $ ceph osd set maintenance=off osd.1
> $ ceph osd set maintenance=off osd.2
> $ ceph osd set maintenance=off osd.3
> $ ceph osd set maintenance=off osd.4
> $ ceph osd set maintenance=off osd.5
> $ ceph osd set maintenance=off osd.6
>  
> What do you think?
> I know the Ceph docs describe a way to do this with reweight, but
> it's a bit complex...
>  
> Thanks,
>  
> Alexis
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html




