Re: How to shutdown a node properly ?


 



On 06/30/2017 12:40 AM, Renaud Fortier wrote:

On my nodes, when I use the systemd script to stop gluster (service glusterfs-server stop), only glusterd is killed. So I guess the shutdown doesn’t kill everything!


Killing glusterd does not kill other gluster processes.
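
You can see this for yourself after a `service glusterfs-server stop`; a quick check (assuming the usual process names glusterd, glusterfs and glusterfsd):

    pgrep -a gluster    # brick (glusterfsd) and self-heal/client (glusterfs) processes are still running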

When you shut down a node, everything obviously gets killed, but the client is not notified immediately that the brick went down, so it waits for the 42-second ping-timeout, after which it assumes the brick is down. When you kill the brick manually before shutdown, the client receives the notification immediately and you don't see the hang. See Xavi's description in Bug 1054694.
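
For reference, that timeout is the network.ping-timeout volume option (42 seconds by default). You can inspect it per volume, e.g. for a volume named myvol (the name here is only a placeholder):

    gluster volume get myvol network.ping-timeout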

So if it is a planned shutdown or reboot, it is better to kill the gluster processes before shutting the node down. BTW, you can use https://github.com/gluster/glusterfs/blob/master/extras/stop-all-gluster-processes.sh, which automatically checks for pending heals etc. before killing the gluster processes.
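
A planned reboot would then look roughly like this (only a sketch; the script location is an assumption, packaged installs may ship it elsewhere):

    # checks for pending heals, then stops bricks, self-heal daemon, glusterd, ...
    bash ./stop-all-gluster-processes.sh
    # now reboot the node
    reboot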

-Ravi
 

 

From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta@xxxxxxxxx]
Sent: 29 June 2017 13:41
To: Ravishankar N <ravishankar@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx; Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx>
Subject: Re: How to shutdown a node properly ?

 

Doesn't the init.d/systemd script kill gluster automatically on reboot/shutdown?

 

On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar@xxxxxxxxxx> wrote:

On 06/29/2017 08:31 PM, Renaud Fortier wrote:

Hi,

Every time I shut down a node, I lose access (from the clients) to the volumes for 42 seconds (network.ping-timeout). Is there a special way to shut down a node so that clients keep access to the volumes without interruption? Currently, I use the ‘shutdown’ or ‘reboot’ command.

Run `killall glusterfs glusterfsd glusterd` before issuing the shutdown or reboot. If it is a replica or EC volume, also ensure that there are no pending heals before bringing down a node, i.e. `gluster volume heal volname info` should show 0 entries.
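
As a rough sequence on the node being taken down (volname is a placeholder for your actual volume name):

    # every brick should report "Number of entries: 0"
    gluster volume heal volname info
    # stop all gluster processes on this node
    killall glusterfs glusterfsd glusterd
    # then reboot or shut down as usual
    reboot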


 

My setup is:

- 4 Gluster 3.10.3 nodes on Debian 8 (jessie)

- 3 volumes, Distributed-Replicate 2 x 2 = 4

 

Thank you

Renaud

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users

 


