Rodney, this was fixed in mainline--2.5 a couple of days back. Please await
the next pre-release (or the latest revision from the repository should have
the fix).
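In the meantime, if you want to double-check which build you are actually
running after pulling a newer snapshot, and to get more context on the
checksum errors, something along these lines should do. This is only a rough
sketch: it assumes the 1.3.x binary accepts the usual --version,
-f/--spec-file, -L/--log-level and -l/--log-file options, and the mount
point and log path below are just placeholders for your own setup.

    # print the release string of the installed binary
    glusterfs --version

    # remount the client with a DEBUG log for more detail around the
    # client_checksum_cbk errors
    glusterfs -f /opt/local/etc/glusterfs/client.vol \
        -L DEBUG -l /var/tmp/glusterfs-client.log /mnt/glusterfs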
avati

2008/4/2, Rodney McDuff <mcduff@xxxxxxxxxxxxx>:
>
> Hi All
>
> I have 4 sets of 3-deep AFRs unified together, with one of the 4 AFRs as
> a namespace (see server and client config attached). The servers and
> client are glusterfs-1.3.8pre4 running on Mac OS X 10.5.2.
>
> The client volume mounts OK and I can read and write files to it. Looking
> at the server volumes, I see files appear in the namespace brick and
> storage bricks as expected for the ALU scheduler.
>
> However, when I do any fop that fetches fstats (i.e. 'ls -al') I see the
> following errors (only from the first brick of each AFR). I don't see
> them on subsequent 'ls -al' runs, so I suppose glusterfs is caching these
> fstats. When I just do a plain 'ls' (pre-cached) I see no such error.
>
> Apr 2 12:09:26 glusterfs[59032]: E
> [client-protocol.c:4605:client_checksum_cbk] ilc1-lectern: no proper
> reply from server, returning EINVAL
> Apr 2 12:09:26 glusterfs[59032]: E
> [client-protocol.c:4605:client_checksum_cbk] ilc1-001: no proper reply
> from server, returning EINVAL
> Apr 2 12:09:26 glusterfs[59032]: E
> [client-protocol.c:4605:client_checksum_cbk] ilc1-003: no proper reply
> from server, returning EINVAL
> Apr 2 12:09:26 glusterfs[59032]: E
> [client-protocol.c:4605:client_checksum_cbk] ilc1-002: no proper reply
> from server, returning EINVAL
>
> Also, when I try to extract a large tar file (lots of files) on the
> gluster filesystem, it hangs for a while after writing just a few files.
> It needs a remount to become operational again.
>
> --
> Dr. Rodney G. McDuff                 | Ex ignorantia ad sapientiam
> Manager, Strategic Technologies Group| Ex luce ad tenebras
> Information Technology Services      |
> The University of Queensland         |
> EMAIL: mcduff@xxxxxxxxxxxxx          |
> TELEPHONE: +61 7 3365 8220           |
>
>
> ------------- namespace server brick
>
> volume brick
>   type storage/posix
>   option directory /var/tmp/export
> end-volume
>
> volume server
>   type protocol/server
>   subvolumes brick
>   option transport-type tcp/server  # For TCP/IP transport
>   option client-volume-filename /opt/local/etc/glusterfs/client.vol
>   option auth.ip.brick.allow *
> end-volume
>
> ------------- storage server brick
>
> volume brick
>   type storage/posix
>   option directory /var/tmp/export
> end-volume
>
> volume pbrick
>   type features/posix-locks
>   option mandatory on  # enables mandatory locking on all files
>   subvolumes brick
> end-volume
>
> volume server
>   type protocol/server
>   subvolumes pbrick
>   option transport-type tcp/server  # For TCP/IP transport
>   option client-volume-filename /opt/local/etc/glusterfs/client.vol
>   option auth.ip.pbrick.allow *
> end-volume
>
> ------------- client config
>
> volume ilc1-lectern
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc1-lectern
>   option remote-subvolume brick
> end-volume
>
> volume ilc3-lectern
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc3-lectern
>   option remote-subvolume brick
> end-volume
>
> volume ilc4-lectern
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc4-lectern
>   option remote-subvolume brick
> end-volume
>
> volume AFR000
>   type cluster/afr
>   subvolumes ilc1-lectern ilc3-lectern ilc4-lectern
> end-volume
>
> volume ilc1-001
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc1-001
>   option remote-subvolume pbrick
> end-volume
>
> volume ilc3-101
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc3-101
>   option remote-subvolume pbrick
> end-volume
>
> volume ilc4-151
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc4-151
>   option remote-subvolume pbrick
> end-volume
>
> volume AFR001
>   type cluster/afr
>   subvolumes ilc1-001 ilc3-101 ilc4-151
> end-volume
>
> volume ilc1-002
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc1-002
>   option remote-subvolume pbrick
> end-volume
>
> volume ilc3-102
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc3-102
>   option remote-subvolume pbrick
> end-volume
>
> volume ilc4-152
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc4-152
>   option remote-subvolume pbrick
> end-volume
>
> volume AFR002
>   type cluster/afr
>   subvolumes ilc1-002 ilc3-102 ilc4-152
> end-volume
>
> volume ilc1-003
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc1-003
>   option remote-subvolume pbrick
> end-volume
>
> volume ilc3-103
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc3-103
>   option remote-subvolume pbrick
> end-volume
>
> volume ilc4-153
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host ilc4-153
>   option remote-subvolume pbrick
> end-volume
>
> volume AFR003
>   type cluster/afr
>   subvolumes ilc1-003 ilc3-103 ilc4-153
> end-volume
>
> volume bricks
>   type cluster/unify
>   option namespace AFR000  # this will not be a storage child of unify
>   subvolumes AFR001 AFR002 AFR003
>   ### ** ALU Scheduler Option **
>   option scheduler alu
>   option alu.limits.min-free-disk 5%  #%
>   option alu.limits.max-open-files 10000
>   option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
>   option alu.disk-usage.entry-threshold 2GB
>   option alu.disk-usage.exit-threshold 128MB
>   option alu.open-files-usage.entry-threshold 1024
>   option alu.open-files-usage.exit-threshold 32
>   option alu.read-usage.entry-threshold 20  #%
>   option alu.read-usage.exit-threshold 4  #%
>   option alu.write-usage.entry-threshold 20  #%
>   option alu.write-usage.exit-threshold 4  #%
>   option alu.disk-speed-usage.entry-threshold 0  # DO NOT SET IT. SPEED IS CONSTANT!!!
>   option alu.disk-speed-usage.exit-threshold 0   # DO NOT SET IT. SPEED IS CONSTANT!!!
>   option alu.stat-refresh.interval 10sec
>   option alu.stat-refresh.num-file-create 10
>   ### ** Random Scheduler **
>   # option scheduler random
>   ### ** NUFA Scheduler **
>   # option scheduler nufa
>   # option nufa.local-volume-name posix1
>   ### ** Round Robin (RR) Scheduler **
>   # option scheduler rr
>   # option rr.limits.min-free-disk 5%  #%
> end-volume
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>

--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.