Re: Shutting down half / full cluster


 



All Ceph flags are global.  Setting them from any server that has the right keyring (usually the OSD nodes, mons, etc. have it) will set the flag for the entire cluster.
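
For example (a minimal sketch; the keyring path is an assumption and depends on your deployment), running this from any one host with an admin keyring sets the flag for the whole cluster:

# any host with /etc/ceph/ceph.client.admin.keyring will do (path is an assumption)
ceph osd set noout          # the flag lives in the cluster's OSD map, not on the local host
ceph osd dump | grep flags  # verify the flag now shows up cluster-wide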

Setting pause on the cluster will prevent anything from changing: OSDs will not be able to be marked down and no map updates will happen.  I prefer not to set it if I can avoid it (and I've never set it before), so I can see when OSDs do come back up and can tell if there is a problem while I'm performing my maintenance.  The flags that prevent OSDs from being marked out and that stop backfilling have always been enough for me.
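
If it helps, a rough way to keep an eye on things during the window (just the standard status commands, nothing specific to this setup):

watch -n 10 ceph -s            # overall health, which flags are set, PG states
ceph osd stat                  # how many OSDs are up / in
ceph osd tree | grep -i down   # which OSDs are still down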

On Wed, Feb 14, 2018 at 11:08 AM <DHilsbos@xxxxxxxxxxxxxx> wrote:
All;

This might be a noob type question, but this thread is interesting, and there's one thing I would like clarified.

David Turner mentions setting 3 flags on the OSDs and Götz has mentioned 5 flags; do the commands need to be run on all OSD nodes, or just on one?

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
300 S. Hamilton Pl.
Gilbert, AZ 85233
Phone: (480) 610-3500
Fax: (480) 610-3501
www.PerformAir.com


From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of David Turner
Sent: Wednesday, February 14, 2018 9:02 AM
To: Götz Reinicke
Cc: ceph-users
Subject: Re: Shutting down half / full cluster

ceph osd set noout
ceph osd set nobackfill
ceph osd set norecover

Noout will prevent OSDs from being marked out during the maintenance, and with the other 2 flags set no PGs will be able to shift data around.  After everything is done, unset the 3 flags and you're good to go.
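
As a rough sketch of that last step (same flag names as above), once the nodes are back:

ceph osd unset noout
ceph osd unset nobackfill
ceph osd unset norecover
ceph -s   # should settle back to HEALTH_OK once all OSDs are up and PGs are active+clean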

On Wed, Feb 14, 2018 at 5:25 AM Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
Thanks!

Götz



On 14.02.2018 at 11:16, Kai Wagner <kwagner@xxxxxxxx> wrote:

Hi,
maybe it's worth looking at this:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017378.html
Kai

On 02/14/2018 11:06 AM, Götz Reinicke wrote:
Hi,

We have some work to do on our power lines for all buildings and we have to shut down all systems, so there is also no traffic on any Ceph client.

Unfortunately, we have to shut down some Ceph nodes in an affected building as well.

To avoid rebalancing (as I see there is no need for it, since there is no traffic on the clients), how can I safely put the remaining cluster nodes into a "keep calm and wait" state?

Is that the noout option?

Thanks for feedback and suggestions! Regards, Götz





--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)


Götz Reinicke
IT-Koordinator
IT-OfficeNet
+49 7141 969 82420
goetz.reinicke@xxxxxxxxxxxxxxx
Filmakademie Baden-Württemberg GmbH
Akademiehof 10 71638 Ludwigsburg
http://www.filmakademie.de




Registered at the Amtsgericht Stuttgart, HRB 205016
Chair of the Supervisory Board:
Petra Olschowski
State Secretary in the Ministry of Science,
Research and the Arts Baden-Württemberg
Managing Director:
Prof. Thomas Schadt



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
