Thanks everybody for the answers! I'll follow the suggestions outlined
below (sketched as a script at the end of this mail) when the LAB here
is back up :O

v

On Wed 26 Feb 2014 09:57:31, Xavier Hernandez wrote:
> Hi Viktor,
>
> If you want to stop gluster completely in a controlled way:
>
> * umount all currently mounted volumes
>
>   This stops the glusterfs process for each mount point.
>
> * gluster volume stop <volname>
>
>   This stops the glusterfsd processes for the bricks of the specified
>   volume, the glusterfs process for the NFS server of that volume (if
>   enabled), and the glusterfs process corresponding to the self-heal
>   daemon (if all volumes on the same server are stopped).
>
> * service glusterd stop
>
>   This stops the glusterd process.
>
> After these steps there shouldn't be any gluster process running on
> any server.
>
> If you only want to stop all processes on one server, I don't know
> any other way than manually killing the gluster processes.
>
> Xavi
>
> On 26/02/14 03:33, Viktor Villafuerte wrote:
> > Ok.. so you claim this is a feature :)
> > So, how do you stop Gluster when you want to stop it then?
> >
> > v
> >
> > On Tue 25 Feb 2014 15:31:18, Joe Julian wrote:
> >> Why is that a problem? Being able to restart the management daemon
> >> without interrupting clients is a common and useful thing.
> >>
> >> On February 25, 2014 3:23:31 PM PST, Viktor Villafuerte
> >> <viktor.villafuerte@xxxxxxxxxxxxxxx> wrote:
> >>> Hi,
> >>>
> >>> I've got the same problem here. I did a completely new installation
> >>> (no upgrades) and when I do 'service glusterd stop' and then
> >>> 'status', it gives the same message. In the meantime there are
> >>> about 5 other processes (1 x glusterfsd + 4 x glusterfs) that are
> >>> still running. I can issue 'service glusterfsd stop', which stops
> >>> the glusterfsd process, but the others stay running. In the logs
> >>> there are 'I' (info) messages about bricks/hosts not being
> >>> available.
> >>>
> >>> It seems that I'm unable to stop gluster unless I start manually
> >>> killing processes :(
> >>>
> >>> v3.4.2-1 from Gluster/latest/RHEL6/6.5
> >>>
> >>> There are also other problems I can see, but I won't confuse this
> >>> post with them..
> >>>
> >>> v
> >>>
> >>> On Tue 25 Feb 2014 11:20:09, Khoi Mai wrote:
> >>>> When you tried gluster 3.4.2-1, did you mean you upgraded it in
> >>>> place while glusterd was running? Are you missing glusterfs-libs,
> >>>> meaning it didn't upgrade with all your other glusterfs packages?
> >>>> Lastly, did you reboot?
> >>>>
> >>>> Khoi Mai
> >>>> Union Pacific Railroad
> >>>> Distributed Engineering & Architecture
> >>>> Project Engineer
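As promised above, here is Xavi's controlled-shutdown sequence as a
script. This is only a sketch: VOLNAME and /mnt/glustervol are
placeholders for your actual volume and mount point, and note that
'gluster volume stop' acts on the volume cluster-wide, not just on the
server you run it from.

    # Controlled shutdown of all gluster processes (run as root)
    umount /mnt/glustervol                      # repeat for every mounted volume
    gluster --mode=script volume stop VOLNAME   # repeat for every started volume;
                                                # stops the brick (glusterfsd), NFS
                                                # and self-heal (glusterfs) processes
    service glusterd stop                       # finally, the management daemon

('--mode=script' just skips the interactive confirmation prompt that
'gluster volume stop' would otherwise show.)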
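For the other case Xavi mentions, stopping everything on a single
server only, there seems to be no clean command in 3.4, so it comes
down to checking and killing processes by hand. A rough sketch
(assumes pgrep/pkill are available; use with care, since bricks get
killed rather than stopped cleanly):

    service glusterd stop   # stops only the glusterd management daemon;
                            # bricks keep serving clients (Joe's point above)
    pgrep -l gluster        # list leftover glusterfs/glusterfsd processes
    pkill gluster           # SIGTERM every process whose name matches 'gluster'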
--
Regards

Viktor Villafuerte
Optus Internet Engineering
t: 02 808-25265

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users