Digging a bit more, I found the following. When I do ls on the mounted
directory, I get these readdir calls on the server side:

2009-01-16 11:56:09 D [server-protocol.c:5000:server_readdir] ns: READDIR 'fd=0; offset=0; size=4096
2009-01-16 11:56:16 D [server-protocol.c:5000:server_readdir] ns: READDIR 'fd=0; offset=264; size=4096
2009-01-16 11:56:22 D [server-protocol.c:5000:server_readdir] ns: READDIR 'fd=0; offset=527; size=4096
2009-01-16 11:56:35 D [server-protocol.c:5000:server_readdir] ns: READDIR 'fd=0; offset=788; size=4096
2009-01-16 11:56:37 D [server-protocol.c:5000:server_readdir] ns: READDIR 'fd=0; offset=1062; size=4096
2009-01-16 11:56:37 D [server-protocol.c:5000:server_readdir] ns: READDIR 'fd=0; offset=1366; size=4096
2009-01-16 11:56:38 D [server-protocol.c:5000:server_readdir] ns: READDIR 'fd=0; offset=2147483647; size=4096

That last readdir looks extremely suspicious: for some strange reason the
client jumps to a very large offset. I will see if I can find more on the
client side.

On Fri, Jan 16, 2009 at 10:06, Anand Avati <avati at zresearch.com> wrote:
>> I *think* (and I'm sure Anand will correct me if I'm wrong) that in the
>> case of a single brick, gluster doesn't care much about its extended
>> attributes, but once another brick/node is involved (in unify or
>> replicate), the extended attributes become important.
>>
>> Pre-existing data is generally without xattrs, and this causes
>> confusion. My guess is that adding the extended attributes (which
>> happens when you copy the file into the gluster mountpoint) would solve
>> the problem.
>
> This is generally true, but in this situation he is using unify (which
> does not use xattrs on the backend), and in most of the other cases, if
> the xattr is missing, the appropriate self-heal will at least try to set
> the most sensible xattr values when the file is first looked up.
>
> Avati