I just discovered yesterday that the systemd configs (in the Fedora RPMs) do, indeed, stop the bricks. I think I know how to fix that; I will test it and submit a bug report and a patch today.

Patrick Irvine <pirv at cybersites.ca> wrote:

>Hey,
>
>Stopping the glusterd instance does not stop any of the other spawned
>daemons. I know this for a fact, as I start and stop glusterd all the
>time without it affecting any of the other daemons.
>
>As for stopping the spawned daemons: Craig Carl (I think that's right),
>years ago when glusterd first came out, said to just kill <pid> each of
>the others. To restart them, you just stop and restart the glusterd
>process and it will respawn any it finds are not already running.
>
>Hope this helps,
>
>Pat.
>
>On 10/04/2013 9:54 AM, Jay Vyas wrote:
>> This is a great question, something I've been wondering about.
>>
>> Reposting some details from Jeff Darcy's email (regarding a similar
>> question I asked) which could help shed some light on this:
>>
>> 1) The daemons that run in Gluster are:
>>
>> glusterd   = management daemon
>> glusterfsd = per-brick daemon
>> glustershd = self-heal daemon
>> glusterfs  = usually client-side, but also NFS on servers
>>
>> 2) The lifecycle of the daemons:
>> *** The others are all started from glusterd, in response to volume
>> start and stop commands ***
>> *** They're actually all the same executable with different
>> translators ***
>> *** glusterfs-server = the server-side Gluster implementation, which
>> needs to be installed for serving Gluster data ***
>>
>> 3) When glusterd starts up, it spawns any daemons that "should" be
>> running (according to which volumes are started, which have NFS or
>> replication enabled, etc.) and seem to be missing.
>>
>> So... if that's the case, then I would say that ***stopping glusterd***
>> should invert the "starting" of the above processes... right?
>> But I would leave it to the Gluster vets to answer this definitively...
>>
>> On Wed, Apr 10, 2013 at 11:51 AM, Guido De Rosa
>> <guido.derosa at vemarsas.it> wrote:
>>
>> Hello list,
>>
>> I've installed GlusterFS via Debian experimental packages, version
>> 3.4.0~qa9realyalpha2-1.
>>
>> (For the record, the reason I use an alpha release is that I want
>> this feature:
>> http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ )
>>
>> I've also followed the Quick Start Guide, and now I have a cluster of 2
>> virtual machines, each contributing one brick to a Gluster volume.
>>
>> Now my issue:
>>
>> Let's assume no machine has actually mounted the Gluster volume.
>>
>> If I do:
>>
>>     ps aux | grep gluster
>>
>> I get a couple of daemons: glusterd, glusterfsd, glusterfs.
>>
>> If I do:
>>
>>     /etc/init.d/glusterfs-server stop
>>
>> I find (re-issuing ps) that glusterd has been terminated BUT the other
>> processes (the glusterfs and glusterfsd instances) *are still running*.
>>
>> (The same happens if I manually kill the glusterd process.)
>>
>> Is this normal? Doesn't this leave the system in an inconsistent
>> state (for example, on system shutdown)?
>>
>> Should the init script be fixed (maybe to include "gluster volume
>> stop" or something)?
>>
>> What's the best practice to terminate *all* Gluster-related processes
>> (especially on system shutdown/reboot)?
>>
>> Thanks,
>> Guido
>>
>> --
>> Jay Vyas
>> http://jayunit100.blogspot.com
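
For anyone who wants to script the manual shutdown Patrick describes, here
is a minimal sketch -- untested, and assuming the daemon names from Jay's
list and the Debian init script from Guido's message:

    # Stop the management daemon first, so it cannot respawn the others.
    /etc/init.d/glusterfs-server stop

    # Then kill the remaining spawned daemons: the per-brick glusterfsd
    # processes and the glusterfs processes (self-heal daemon, NFS server,
    # and any local client mounts).
    killall glusterfsd
    killall glusterfs

Note that "killall glusterfs" also terminates any client-side mount
processes on the machine, which is presumably what you want at shutdown
anyway. Guido's alternative -- running "gluster volume stop <volume>"
before stopping glusterd -- shuts the bricks down through the normal code
path, but it stops the volume on *every* peer, not just the local node, so
it only fits a whole-cluster shutdown.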
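
As for the systemd behaviour mentioned at the top: by default systemd
stops a service by killing every process in the unit's control group,
which takes the bricks down along with glusterd. One plausible fix -- a
guess at the kind of patch being prepared, not necessarily the actual
one -- is to limit the stop action to glusterd itself in glusterd.service:

    [Service]
    # Signal only the main glusterd process on "systemctl stop"; the
    # default (KillMode=control-group) also kills the bricks and other
    # daemons glusterd spawned into the same control group.
    KillMode=process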