NFS problem

Hi,

I'm seeing the same problem as Juergen.
My volume is a simple replicated volume across 2 hosts, running GlusterFS 3.2.0:

Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
nfs.port: 2049
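
For context, here is a hedged reconstruction of how this volume would have been created and tuned, based on the volume info above (not the original command history):

    gluster volume create poolsave replica 2 transport tcp \
        ylal2950:/soft/gluster-data ylal2960:/soft/gluster-data
    gluster volume start poolsave
    gluster volume set poolsave network.ping-timeout 20
    gluster volume set poolsave performance.cache-size 512MB
    gluster volume set poolsave nfs.port 2049
    gluster volume set poolsave diagnostics.brick-log-level DEBUG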

I'm running a tar command against the NFS mount.
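The exact command line was lost with the HTML attachment; judging from the errors below it was an extract/restore, something along these lines (archive name and mount point are hypothetical):

    # hypothetical reconstruction -- extract into the NFS-mounted volume
    tar -xvpf uvs00-backup.tar -C /mnt/poolsave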

I get these errors:
tar: ./uvs00: owner not changed
tar: could not stat ./uvs00/log/0906uvsGESEC.log
tar: ./uvs00: group not changed
tar: could not stat ./uvs00/log/0306uvsGESEC.log
tar: ./uvs00/log: Input/output error
cannot change back?: Unknown error 526
tar: ./uvs00/log: owner not changed
tar: ./uvs00/log: group not changed
tar: tape blocksize error

Then I tried to "ls" in the gluster mount:
/bin/ls: .: Input/output error

The only way to recover is to restart the volume, e.g. as below.
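
That is, on one of the gluster servers (the standard volume bounce; the NFS clients then recover without a remount, as Juergen also observed):

    gluster volume stop poolsave
    gluster volume start poolsave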


Here is the log file in DEBUG mode:


Given volfile:
+------------------------------------------------------------------------------+
  1: volume poolsave-client-0
  2:     type protocol/client
  3:     option remote-host ylal2950
  4:     option remote-subvolume /soft/gluster-data
  5:     option transport-type tcp
  6:     option ping-timeout 20
  7: end-volume
  8: 
  9: volume poolsave-client-1
 10:     type protocol/client
 11:     option remote-host ylal2960
 12:     option remote-subvolume /soft/gluster-data
 13:     option transport-type tcp
 14:     option ping-timeout 20
 15: end-volume
 16: 
 17: volume poolsave-replicate-0
 18:     type cluster/replicate
 19:     subvolumes poolsave-client-0 poolsave-client-1
 20: end-volume
 21: 
 22: volume poolsave-write-behind
 23:     type performance/write-behind
 24:     subvolumes poolsave-replicate-0
 25: end-volume
 26: 
 27: volume poolsave-read-ahead
 28:     type performance/read-ahead
 29:     subvolumes poolsave-write-behind
 30: end-volume
 31: 
 32: volume poolsave-io-cache
 33:     type performance/io-cache
 34:     option cache-size 512MB
 35:     subvolumes poolsave-read-ahead
 36: end-volume
 37: 
 38: volume poolsave-quick-read
 39:     type performance/quick-read
 40:     option cache-size 512MB
 41:     subvolumes poolsave-io-cache
 42: end-volume
 43: 
 44: volume poolsave-stat-prefetch
 45:     type performance/stat-prefetch
 46:     subvolumes poolsave-quick-read
 47: end-volume
 48: 
 49: volume poolsave
 50:     type debug/io-stats
 51:     option latency-measurement off
 52:     option count-fop-hits off
 53:     subvolumes poolsave-stat-prefetch
 54: end-volume
 55: 
 56: volume nfs-server
 57:     type nfs/server
 58:     option nfs.dynamic-volumes on
 59:     option rpc-auth.addr.poolsave.allow *
 60:     option nfs3.poolsave.volume-id 71e0dabf-4620-4b6d-b138-3266096b93b6
 61:     option nfs.port 2049
 62:     subvolumes poolsave
 63: end-volume

+------------------------------------------------------------------------------+
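
(Side note: the nfs-server volume above exports poolsave on port 2049, and the Gluster NFS server speaks NFSv3 over TCP only, so the client-side mount would look roughly like this, with a hypothetical mount point:

    mount -t nfs -o vers=3,proto=tcp ylal2950:/poolsave /mnt/poolsave
)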
[2011-06-09 16:52:23.709018] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-0: changing port to 24014 (from 0)
[2011-06-09 16:52:23.709211] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-1: changing port to 24011 (from 0)
[2011-06-09 16:52:27.716417] I [client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-09 16:52:27.716650] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-0: Connected to 10.68.217.85:24014, attached to remote volume '/soft/gluster-data'.
[2011-06-09 16:52:27.716679] I [afr-common.c:2514:afr_notify] 0-poolsave-replicate-0: Subvolume 'poolsave-client-0' came back up; going online.
[2011-06-09 16:52:27.717020] I [afr-common.c:836:afr_fresh_lookup_cbk] 0-poolsave-replicate-0: added root inode
[2011-06-09 16:52:27.729719] I [client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-1: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-09 16:52:27.730014] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-1: Connected to 10.68.217.86:24011, attached to remote volume '/soft/gluster-data'.
[2011-06-09 17:01:35.537084] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) [0x2aaaab2948e1] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2) [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.546601] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.569755] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.569881] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.579674] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) [0x2aaaab2948e1] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2) [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.587907] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.612918] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.645357] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.660873] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.660955] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.665933] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.666057] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.671199] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.671241] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.680959] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.715633] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.732798] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Permission denied
[2011-06-09 17:01:35.733044] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Permission denied
[2011-06-09 17:01:35.750009] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.784610] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.68.217.85:24014)
[2011-06-09 17:01:35.784745] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752080
[2011-06-09 17:01:35.784770] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-0: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.784811] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0: forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.752414
[2011-06-09 17:01:35.784828] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-0: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.784875] I [client.c:1883:client_rpc_notify] 0-poolsave-client-0: disconnected
[2011-06-09 17:01:35.785400] W [socket.c:204:__socket_rwv] 0-poolsave-client-1: readv failed (Connection reset by peer)
[2011-06-09 17:01:35.785435] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-1: reading from socket failed. Error (Connection reset by peer), peer (10.68.217.86:24011)
[2011-06-09 17:01:35.785496] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752089
[2011-06-09 17:01:35.785516] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.785542] W [client3_1-fops.c:4379:client3_1_xattrop] 0-poolsave-client-0: failed to send the fop: Transport endpoint is not connected
[2011-06-09 17:01:35.817662] I [socket.c:2272:socket_submit_request] 0-poolsave-client-1: not connected (priv->connected = 0)
[2011-06-09 17:01:35.817698] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to submit rpc-request (XID: 0x576x Program: GlusterFS 3.1, ProgVers: 310, Proc: 33) to rpc-transport (poolsave-client-1)
[2011-06-09 17:01:35.817721] W [client3_1-fops.c:4735:client3_1_inodelk] 0-poolsave-client-0: failed to send the fop: Transport endpoint is not connected
[2011-06-09 17:01:35.817744] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to submit rpc-request (XID: 0x577x Program: GlusterFS 3.1, ProgVers: 310, Proc: 29) to rpc-transport (poolsave-client-1)
[2011-06-09 17:01:35.817780] I [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.817897] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1: forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.784870
[2011-06-09 17:01:35.817918] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.817969] I [client.c:1883:client_rpc_notify] 0-poolsave-client-1: disconnected
[2011-06-09 17:01:35.817988] E [afr-common.c:2546:afr_notify] 0-poolsave-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2011-06-09 17:01:35.818007] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-1: connection to 10.68.217.86:24011 failed (Connection refused)
[2011-06-09 17:01:35.818606] I [afr.h:838:AFR_LOCAL_INIT] 0-poolsave-replicate-0: no subvolumes up
[2011-06-09 17:01:35.819129] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up
[2011-06-09 17:01:35.819354] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up
[2011-06-09 17:01:35.820090] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00: no child is up
[2011-06-09 17:01:35.820760] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821212] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821600] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822123] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822511] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822975] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823286] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823583] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823857] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:47.518006] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:49.39204] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-0: connection to 10.68.217.85:24014 failed (Connection refused)
[2011-06-09 17:01:49.136932] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up



> Message: 7
> Date: Thu, 9 Jun 2011 12:56:39 +0530
> From: Shehjar Tikoo <shehjart at gluster.com>
> Subject: Re: Glusterfs 3.2.0 NFS Problem
> To: Jürgen Winkler <juergen.winkler at xidras.com>
> Cc: gluster-users at gluster.org
> Message-ID: <4DF075AF.3040509 at gluster.com>
> Content-Type: text/plain; charset="us-ascii"; format=flowed
> 
> This can happen if all your servers were unreachable for a few seconds. The 
> situation must have rectified during the restart. We could confirm if you 
> change the log level on nfs to DEBUG and send us the log.
> 
> Thanks
> -Shehjar
> 
> Ju"rgen Winkler wrote:
> > Hi,
> > 
> > I noticed a strange behavior with NFS and Glusterfs 3.2.0: 3 of our 
> > servers are losing the mount, but when you restart the volume on the 
> > server it works again without a remount.
> > 
> > On the server I noticed these entries in the Glusterfs/NFS log file when 
> > the mount on the client becomes unavailable:
> > 
> > [2011-06-08 14:37:02.568693] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:02.569212] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:02.611910] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:02.624477] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.288272] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.296150] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.309247] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.320939] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.321786] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.333609] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.334089] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.344662] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.352666] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.354195] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.360446] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.369331] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.471556] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.480013] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:05.639700] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:05.652535] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.578469] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.588949] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.590395] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.591414] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.591932] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.592596] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.639317] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.652919] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.332435] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.340622] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.349360] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.349550] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.360445] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.369497] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.369752] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.382097] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.382387] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > 
> > 
> > Thx for the help
> > 
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> 
> ------------------------------
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> End of Gluster-users Digest, Vol 38, Issue 14
> *********************************************

