Re: latest TLA has daemon showing up as [glusterfs]

No, not noticeably. I'll leave it running to see what it does.

-Mic

Anand Avati wrote:
After you unmount, is the CPU consumption going up?

avati

2008/2/29, Mickey Mazarick <mic@xxxxxxxxxxxxxxxxxx>:

    It's nothing that complicated...
    one mount, one process that sticks around. I am using ib-verbs, btw.

    [root@RTPST205 ~]# ps -ef |grep gluster
    root     14423 13814  0 15:14 pts/1    00:00:00 grep gluster
    [root@RTPST205 ~]# /usr/local/sbin/glusterfs -l /var/glustersystemclient.log -f /etc/glusterfs-system.vol -d disable /system
    [root@RTPST205 ~]# ps -ef |grep gluster
    root     10972     1  0 12:16 ?        00:00:01 [glusterfs]
    root     14438 13814  0 15:16 pts/1    00:00:00 grep gluster
    [root@RTPST205 ~]# umount /system
    [root@RTPST205 ~]# ps -ef |grep gluster
    root     10972     1  0 12:16 ?        00:00:02 [glusterfs]
    root     14451 13814  0 15:17 pts/1    00:00:00 grep gluster
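
    For scripting around this, here is a minimal sketch of how the leftover
    client could be found and stopped after the unmount. It assumes the
    bracketed [glusterfs] entry in ps still reports "glusterfs" as its
    process name, so pgrep -x matches it:

    #!/bin/sh
    # Sketch: clean up a glusterfs client left behind after "umount /system".
    # Assumption: the process shown as [glusterfs] still has the comm name
    # "glusterfs", so pgrep -x can find it.
    PIDS=$(pgrep -x glusterfs)
    if [ -n "$PIDS" ]; then
        echo "killing leftover glusterfs process(es): $PIDS"
        kill $PIDS
    fi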


    -Mic


    Sascha Ottolski wrote:
    > On Friday, 29 February 2008 at 12:33:36, Anand Avati wrote:
    >
    >> It depends on whether you passed a mountpoint as a command line
    >> argument. Do you mean that the glusterfs client is still running
    >> after you unmounted? Or did you mean that before unmounting there
    >> were two [glusterfs] processes and after unmounting there is just
    >> one (which would be the server)?
    >>
    >
    > If I may add my 2 cents: as I reported in an earlier posting, it is
    > possible to mount the same mount-point several times, which results
    > in several glusterfs processes running. In such a case, you need to
    > kill all of them or umount several times.
    >
    > Of course, it would be best to prevent a second mount of an already
    > mounted gluster share (someone posted a recipe on how one could do
    > this; maybe this should be patched into the sources?).
    >
    >
    > Cheers, Sascha
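
    Along those lines, a minimal sketch of the kind of guard being suggested,
    assuming the mountpoint is /system and that checking /proc/mounts is an
    acceptable test (the recipe posted earlier may well differ):

    #!/bin/sh
    # Sketch: refuse to mount a second time if something is already
    # mounted on /system. /proc/mounts lists the mountpoint (field 2)
    # once a mount is active.
    MOUNTPOINT=/system
    if grep -q " $MOUNTPOINT " /proc/mounts; then
        echo "$MOUNTPOINT is already mounted, not mounting again" >&2
        exit 1
    fi
    /usr/local/sbin/glusterfs -l /var/glustersystemclient.log \
        -f /etc/glusterfs-system.vol -d disable /system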
    >
    >
    >
    >> avati
    >>
    >
    >
    >> 2008/2/29, Mickey Mazarick <mic@xxxxxxxxxxxxxxxxxx>:
    >>
    >>> Ah I see! I have an additional comment/question though.
    >>>
    >>> I've noticed that when I unmount a gluster volume on a client, the
    >>> [glusterfs] process is still running.  Perhaps it thinks my client
    >>> spec is a server spec.
    >>> I have no instance of "type protocol/server" in the client spec.
    >>> What does it use to determine whether it's a client or a server?
    >>>
    >>> -Mickey Mazarick
    >>>
    >>> Anand Avati wrote:
    >>>
    >>>> Mickey,
    >>>>  in the latest codebase, there are no longer separate server and
    >>>> client programs. There is just one glusterfs (and glusterfsd is a
    >>>> symlink to glusterfs). It behaves either as a client or as a
    >>>> server according to the volume spec file it is given.
    >>>>
    >>>>  If the glusterfs program is passed a mountpoint, it attaches a
    >>>> fuse translator in a hardcoded way, which preserves backward
    >>>> compatibility. The new model also allows both protocol/server and
    >>>> mount/fuse (mountability) in the same spec file, which can be a
    >>>> performance improvement in NUFA mode of operation.
    >>>>
    >>>> avati
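
    To illustrate the distinction, here is a minimal sketch of the two kinds
    of spec file; the volume names are made up and the exact option names
    (auth settings, ib-verbs vs. tcp transport) may differ between releases:

    # server spec: the presence of a protocol/server volume makes the
    # glusterfs binary behave as a server (a real spec would also carry
    # the usual auth options)
    volume brick
      type storage/posix
      option directory /data/export
    end-volume

    volume server
      type protocol/server
      option transport-type tcp/server
      subvolumes brick
    end-volume

    # client spec: only protocol/client volumes; passing a mountpoint on
    # the command line attaches the fuse translator and mounts it
    volume client
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.0.1
      option remote-subvolume brick
    end-volume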
    >>>>
    >>>> 2008/2/29, Mickey Mazarick <mic@xxxxxxxxxxxxxxxxxx>:
    >>>>
    >>>>
    >>>>     This is very minor, but it did break a kill script I wrote.
    >>>>
    >>>>     When I run the latest build, the server daemon shows up as:
    >>>>     [glusterfs]
    >>>>     in the process list.
    >>>>
    >>>>     Just an FYI :-)
    >>>>     Thanks!
    >>>>     -Mickey Mazarick
    >>>>
    >>>>     --
    >>>>
    >>>>
    >>>>     _______________________________________________
    >>>>     Gluster-devel mailing list
    >>>>
    >>>>     Gluster-devel@xxxxxxxxxx
    >>>>
    >>>>     http://lists.nongnu.org/mailman/listinfo/gluster-devel
    >>>>
    >>>>
    >>>>
    >>>>
    >>>> --
    >>>> If I traveled to the end of the rainbow
    >>>> As Dame Fortune did intend,
    >>>> Murphy would be there to tell me
    >>>> The pot's at the other end.
    >>>>
    >>> --
    >>>
    >
    >
    >
    >
    > _______________________________________________
    > Gluster-devel mailing list
    > Gluster-devel@xxxxxxxxxx
    > http://lists.nongnu.org/mailman/listinfo/gluster-devel
    >



    --



    _______________________________________________
    Gluster-devel mailing list
    Gluster-devel@xxxxxxxxxx
    http://lists.nongnu.org/mailman/listinfo/gluster-devel




--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.


--



