Hello,

I am evaluating GlusterFS and have seen some strange behavior with remove. I have gluster/2.0.0rc4 set up on 10 Linux nodes connected with GigE. The config is NUFA over FUSE with one storage brick per server, as seen in the attached nufa.vol config file, which I use for both clients and servers.

My experiment is to launch 10 parallel writers, each of which writes 32GiB worth of data in small files (2MB) to a shared gluster-fuse mounted filesystem. The files are named uniquely per client, so each file is written only once. This worked well, and I am seeing performance close to that of the native disk, even with 8 writers per node.

However, when I do a parallel "rm -rf writedir/" on the 10 nodes, where writedir is the directory written to by the parallel writers described above, I see strange effects. There are 69,000 UNLINK errors in the glusterfsd.log of one server, in the form shown below. That alone is not surprising, since the operation is occurring in parallel. However, the remove took much longer than expected (92 min), and, more surprisingly, the rm command exited 0 yet files remained in writedir! I then ran "rm -rf writedir" from a single client, and it too exited 0 but left writedir non-empty. Is this expected? (A rough sketch of the workload is appended at the end of this message.)

Thanks,
Federico

--From glusterfsd.log--
2009-05-04 11:35:15 E [fuse-bridge.c:964:fuse_unlink_cbk] glusterfs-fuse: 5764889: UNLINK() /write.2MB.runid1.p1/5 => -1 (No such file or directory)
2009-05-04 11:35:15 E [dht-common.c:1294:dht_err_cbk] nufa: subvolume drdan0192 returned -1 (No such file or directory)
2009-05-04 11:35:15 E [fuse-bridge.c:964:fuse_unlink_cbk] glusterfs-fuse: 5764894: UNLINK() /write.2MB.runid1.p1/51 => -1 (No such file or directory)
--end--

<<nufa.vol>>
(attachment: http://zresearch.com/pipermail/gluster-users/attachments/20090507/0fc0886e/attachment.obj)
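
In case it helps to reproduce, the workload is roughly the following; the mount point, directory, and file names below are illustrative placeholders rather than the exact script that was run:

  #!/bin/sh
  # Run on each of the 10 nodes in parallel.
  # /mnt/gluster is assumed to be the NUFA/FUSE mount point;
  # 16384 x 2MB files per node gives the 32GiB per writer.
  HOST=$(hostname -s)
  DIR=/mnt/gluster/writedir
  mkdir -p "$DIR"
  i=1
  while [ "$i" -le 16384 ]; do
      # Files are named uniquely per client, so each file is written only once.
      dd if=/dev/zero of="$DIR/write.2MB.$HOST.$i" bs=2M count=1 2>/dev/null
      i=$((i + 1))
  done

  # The problematic step: the same remove run concurrently on all 10 nodes.
  rm -rf "$DIR"
  echo "rm exit status: $?"      # exits 0 ...
  ls "$DIR" 2>/dev/null          # ... yet entries may still be listed here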