Further to my message below, I'm getting a lot (thousands?) of errors like this in the glusterfsd server log:

2008-07-15 14:19:41 E [posix.c:1984:posix_setdents] brick-ns: Error creating file /data/export-ns/mydata/myfile.txt with mode (0100644)

There is nothing relevant in syslog or the client logs, I don't think.

skimber wrote:
>
> Thanks for the responses.
>
> It turned out that the issue was with the disk in one of the clients.
> Using the other client machine it appears to be working fine, although it
> does seem very slow.
>
> An ls -l on a directory containing about 150 files took 5 mins, and the
> rsync will only go at a rate of roughly one file every 3 seconds, with an
> average file size of 30 to 50 KB.
>
> If I do the rsync to the client's local HD instead, it copies many files
> per second.
>
> I have tried adding the following to the end of the client config from my
> original post, but it doesn't appear to have made any noticeable
> difference:
>
> volume readahead
>   type performance/read-ahead
>   option page-size 128kB            # 256KB is the default
>   option page-count 4               # 2 is the default
>   option force-atime-update off     # default is off
>   subvolumes unify
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option aggregate-size 1MB         # default is 0bytes
>   option flush-behind on            # default is 'off'
>   subvolumes readahead
> end-volume
>
> volume io-cache
>   type performance/io-cache
>   option cache-size 64MB            # default is 32MB
>   option page-size 1MB              # 128KB is the default
>   option priority *:0               # default is '*:0'
>   option force-revalidate-timeout 2 # default is 1
>   subvolumes writebehind
> end-volume
>
> Can anyone tell me if I have done this correctly and/or suggest anything
> else I can do to fix this performance issue?
>
> Thanks
>
> Simon

--
View this message in context: http://www.nabble.com/Rsync-failure-problem-tp18420195p18466242.html
Sent from the gluster-devel mailing list archive at Nabble.com.
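To gauge the scale of the problem, the setdents errors can be tallied straight from the server log. A minimal sketch, assuming the log line format quoted above; the helper name is made up, and the log path you pass depends on your installation:

```shell
# count_setdents: count posix_setdents error lines in a glusterfsd log.
# Hypothetical helper; pass the path to your server log file.
count_setdents() {
    grep -c 'posix_setdents' "$1"
}
```

Usage would be something like `count_setdents /var/log/glusterfsd.log` (adjust the path to wherever your glusterfsd writes its log).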