Re: Files not available in all clients immediately

Amar,

I replaced the server and all clients with patch-710. With this version I was able to mount the glusterfs volume, but the original problem remains:

2008-03-18 22:54:40 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 8728: /QHc9Gi1lXRnURud3MmGz3tyjSgLfmT5M => 204460 Rehashing because st_nlink less than dentry maps
2008-03-18 22:54:40 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 8735: /pnnLJ3EstrnDQQHjQtRJd5VwjKye2Bhn => 204462 Rehashing because st_nlink less than dentry maps
2008-03-18 22:54:40 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 8742: /ysIRd6Pss7Z6W7GP16z0UQ96y4VRHTIJ => 204463 Rehashing because st_nlink less than dentry maps
2008-03-18 22:55:34 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 9599: /9BFa3KJ9cpkbai6AzXaMIborxLYao2lt => 204435 Rehashing because st_nlink less than dentry maps
2008-03-18 22:55:52 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 9827: /KBHHPXIUFzJvje2sWk9QqzEqoAOLpGwf => 204495 Rehashing because st_nlink less than dentry maps
2008-03-18 22:55:52 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 9833: /51MOR3FyFCsQPQuBqaL0NnrSYDRjcwj6 => 204518 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:01 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 9927: /Lqnkrr4420JJyZ2XJqGkqsgR9Q0UeX6T => 204466 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:01 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 9934: /T4XxoQpSxMELBBIwpC3Whme5t8yYMRJa => 204471 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:01 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 9948: /YC0qJs9fREgAE9UFkGiFGgkAa1bNYHt5 => 204488 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:01 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 9955: /2EzNMzA38tj3fCLGJkrzPBeU49vleWKM => 204489 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:23 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 10323: /gV2xJR4hsHZj4olgRRlUFJTzcSIRym3a => 204448 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:24 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 10341: /eQEObb9IFV4VLzpvXHkxm90mjW85ih5j => 204416 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:24 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 10347: /737k73PiweusfPf4CN6SPTXmRVGeBC5O => 204421 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:24 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 10357: /FVqYNAeqgDYIxzahNybKtk0gVw6TX1kr => 204497 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:24 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 10364: /WKW7SNvGeRRGzl7USIIrQwsrKKDtIu69 => 204498 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:28 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 10402: /fWTxnBDfas3pth57dA4dwGYgfrOYWB2G => 204398 Rehashing because st_nlink less than dentry maps
2008-03-18 22:56:35 W [fuse-bridge.c:402:fuse_entry_cbk] glusterfs-fuse: 11919: /Q5eQvZg1E8L3Uaj0X1bDD5B9LAGRwevB => 204486 Rehashing because st_nlink less than dentry maps
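
The warnings suggest the clients are still holding dentries for files whose
link count has already changed on the server. As a next test I can remount
with the fuse lookup caches disabled, to see whether the missing files are
just stale client-side caching. A sketch only (the option names are the ones
from the fuse volume in my earlier dump; 0 is simply the "no caching" value
in fuse terms, I have not yet verified it is accepted here):

volume fuse
 type mount/fuse
 option direct-io-mode 1
 option entry-timeout 0    # was 1: do not cache lookup results
 option attr-timeout 0     # was 1: do not cache attributes
 option mount-point /C3Systems/data/domains/webmail.pop.com.br/attachments
 subvolumes iocache
end-volume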

Regards,

Claudio


Amar S. Tumballi wrote:
Hi Claudio,
I made a fix for that bug, and patch-710 should work fine for you. You can upgrade just the client machine for a quick test.

Regards,
Amar

On Tue, Mar 18, 2008 at 5:37 PM, Amar S. Tumballi <amar@xxxxxxxxxxxxx> wrote:

    Nope, that's the latest. But this should be fixed soon (during
    office hours IST). Sorry for the inconvenience.

    -amar


    On Tue, Mar 18, 2008 at 5:07 PM, Claudio Cuqui <claudio@xxxxxxxxxxxxxxxx> wrote:

        Hi Avati,

        I tried, but it doesn't even start:

        TLA Repo Revision: glusterfs--mainline--2.5--patch-709
        Time : 2008-03-18 20:52:32
        Signal Number : 11

        /C3Systems/gluster/bin/sbin/glusterfs -f /C3Systems/gluster/bin/etc/glusterfs/glusterfs-client.vol -l /C3Systems/gluster/bin-patch709/var/log/glusterfs/glusterfs.log -L WARNING /C3Systems/data/domains/webmail.pop.com.br/attachments
        volume fuse
         type mount/fuse
         option direct-io-mode 1
         option entry-timeout 1
         option attr-timeout 1
         option mount-point /C3Systems/data/domains/webmail.pop.com.br/attachments
         subvolumes iocache
        end-volume

        volume iocache
         type performance/io-cache
         option page-count 2
         option page-size 256KB
         subvolumes readahead
        end-volume

        volume readahead
         type performance/read-ahead
         option page-count 2
         option page-size 1MB
         subvolumes client
        end-volume

        volume client
         type protocol/client
         option remote-subvolume attachments
         option remote-host 200.175.8.85
         option transport-type tcp/client
        end-volume

        frame : type(1) op(34)

        /lib64/libc.so.6[0x3edca300c0]
        /lib64/libc.so.6(strcmp+0x0)[0x3edca75bd0]
        /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/mount/fuse.so[0x2aaaab302937]
        /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/mount/fuse.so[0x2aaaab302b42]
        /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/performance/io-cache.so(ioc_lookup_cbk+0x67)[0x2aaaab0f6557]
        /C3Systems/gluster/bin-patch709/lib/libglusterfs.so.0[0x2aaaaaab8344]
        /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/protocol/client.so(client_lookup_cbk+0x1b3)[0x2aaaaace93a3]
        /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/protocol/client.so(notify+0x8fc)[0x2aaaaace273c]
        /C3Systems/gluster/bin-patch709/lib/libglusterfs.so.0(sys_epoll_iteration+0xc0)[0x2aaaaaabdb90]
        /C3Systems/gluster/bin-patch709/lib/libglusterfs.so.0(poll_iteration+0x75)[0x2aaaaaabd095]
        [glusterfs](main+0x658)[0x4026b8]
        /lib64/libc.so.6(__libc_start_main+0xf4)[0x3edca1d8a4]
        [glusterfs][0x401b89]
        ---------
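
        In case it helps isolate the crash: the trace passes through
        io-cache (ioc_lookup_cbk) before dying in fuse.so, so a
        stripped-down spec without the performance translators would
        show whether io-cache is involved. A sketch only, reusing the
        client volume from the dump above and mounting with the same
        command line:

        volume client
         type protocol/client
         option transport-type tcp/client
         option remote-host 200.175.8.85
         option remote-subvolume attachments
        end-volume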

        Is there any other release that I should try?

        Regards,

        Cuqui

        Anand Avati wrote:
        > Claudio,
        >  Can you try with glusterfs--mainline--2.5--patch-709 ? A similar
        > issue is addressed in that revision. We are interested to know if
        > that solves your issue as well.
        >
        > thanks,
        >
        > avati
        >
        > 2008/3/19, Claudio Cuqui <claudio@xxxxxxxxxxxxxxxx>:
        >
        >     Hi there !
        >
        >     We are using gluster in an environment with multiple
        >     webservers and a load balancer, where we have only one server
        >     and multiple clients (6). All servers run Fedora Core 6 x86_64
        >     with kernel 2.6.22.14-72.fc6 (with exactly the same packages
        >     installed on every server). The gluster version used is
        >     1.3.8pre2 + 2.7.2glfs8 (both compiled locally). The underlying
        >     FS is reiserfs, mounted with the options
        >     rw,noatime,nodiratime,notail. This filesystem has almost 4
        >     thousand files from 2 KB to 10 MB in size. We are using
        >     gluster to export this filesystem to all the other webservers.
        >     Below is the config file used by the gluster server:
        >
        >     ### Export volume "attachments" with the contents of the
        >     ### attachments directory.
        >     volume attachments-nl
        >       type storage/posix                   # POSIX FS translator
        >       option directory /C3Systems/data/domains/webmail.pop.com.br/attachments
        >     end-volume
        >
        >     volume attachments
        >       type features/posix-locks
        >       subvolumes attachments-nl
        >       option mandatory on
        >     end-volume
        >
        >
        >     ### Add network serving capability to above brick.
        >     volume server
        >       type protocol/server
        >       option transport-type tcp/server     # For TCP/IP transport
        >       option client-volume-filename /C3Systems/gluster/bin/etc/glusterfs/glusterfs-client.vol
        >       subvolumes attachments-nl attachments
        >       option auth.ip.attachments-nl.allow * # Allow access to "attachments-nl" volume
        >       option auth.ip.attachments.allow *    # Allow access to "attachments" volume
        >     end-volume
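        >
        >     ### Side note (a sketch, not part of the running config):
        >     ### because both attachments-nl and attachments are exported,
        >     ### a client can set "option remote-subvolume attachments-nl"
        >     ### and reach the raw posix volume, bypassing posix-locks. If
        >     ### direct access to attachments-nl is not needed, a tighter
        >     ### export would list only the locks-wrapped volume:
        >     # volume server
        >     #   type protocol/server
        >     #   option transport-type tcp/server
        >     #   option client-volume-filename /C3Systems/gluster/bin/etc/glusterfs/glusterfs-client.vol
        >     #   subvolumes attachments
        >     #   option auth.ip.attachments.allow *
        >     # end-volume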
        >
        >     The problem happens when the LB sends the POST (the uploaded
        >     file) to one webserver and the next POST goes to another
        >     webserver that tries to access the same file. When this
        >     happens, the other client gets these messages:
        >
        >     PHP Warning:
        >     fopen(/C3Systems/data/domains/c3systems.com.br/attachments/27gBgFQSIiOLDEo7AvxlpsFkqZw9jdnZ):
        >     failed to open stream: File Not Found.
        >     PHP Warning:
        >     unlink(/C3Systems/data/domains/c3systems.com.br/attachments/5Dech7jNxjORZ2cZ9IAbR7kmgmgn2vTE):
        >     File Not Found.
        >
        >     The LB is using round-robin to distribute the load between the
        >     servers.
        >
        >     Below is the gluster configuration file used by all clients:
        >
        >     ### file: client-volume.spec.sample
        >
        >     ##############################################
        >     ###  GlusterFS Client Volume Specification  ##
        >     ##############################################
        >
        >     #### CONFIG FILE RULES:
        >     ### "#" is comment character.
        >     ### - Config file is case sensitive
        >     ### - Options within a volume block can be in any order.
        >     ### - Spaces or tabs are used as delimiters within a line.
        >     ### - Each option should end within a line.
        >     ### - Missing or commented fields will assume default values.
        >     ### - Blank/commented lines are allowed.
        >     ### - Sub-volumes should already be defined above before
        >     ###   referring to them.
        >
        >     ### Add client feature and attach to remote subvolume
        >     volume client
        >       type protocol/client
        >       option transport-type tcp/client     # for TCP/IP transport
        >     # option ib-verbs-work-request-send-size  1048576
        >     # option ib-verbs-work-request-send-count 16
        >     # option ib-verbs-work-request-recv-size  1048576
        >     # option ib-verbs-work-request-recv-count 16
        >     # option transport-type ib-sdp/client  # for Infiniband transport
        >     # option transport-type ib-verbs/client # for ib-verbs transport
        >       option remote-host 1.2.3.4           # IP address of the remote brick
        >     # option remote-port 6996              # default server port is 6996
        >     # option transport-timeout 30          # seconds to wait for a reply
        >                                            # from server for each request
        >       option remote-subvolume attachments  # name of the remote volume
        >     end-volume
        >
        >     ### Add readahead feature
        >     volume readahead
        >       type performance/read-ahead
        >       option page-size 1MB      # unit in bytes
        >       option page-count 2       # cache per file = (page-count x page-size)
        >       subvolumes client
        >     end-volume
        >
        >     ### Add IO-Cache feature
        >     volume iocache
        >       type performance/io-cache
        >       option page-size 256KB
        >       option page-count 2
        >       subvolumes readahead
        >     end-volume
        >
        >     ### Add writeback feature
        >     #volume writeback
        >     #  type performance/write-behind
        >     #  option aggregate-size 1MB
        >     #  option flush-behind off
        >     #  subvolumes iocache
        >     #end-volume
        >
        >     When I do the test manually, everything goes fine. What I
        >     think is happening is that gluster isn't having enough time to
        >     sync all the clients before they try to access the files
        >     (those servers are very busy ones... they receive millions of
        >     requests per day).
        >
        >     Is this configuration appropriate for this situation? A bug? A
        >     feature ;-)? Is there any option, like sync in NFS, that I can
        >     use to guarantee that when a file is written, all the clients
        >     already have it?
        >
        >     TIA,
        >
        >     Claudio Cuqui
        >
        >
        >
        >
        >
        >
        > --
        > If I traveled to the end of the rainbow
        > As Dame Fortune did intend,
        > Murphy would be there to tell me
        > The pot's at the other end.





    --
    Amar Tumballi
    Gluster/GlusterFS Hacker
    [bulde on #gluster/irc.gnu.org]
    http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!



--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!


