> So you have 5 servers, each storing a portion of a stripe. You get a 5x
> change in allocation? This sounds less like an xfs issue and more like a
> gluster allocation issue. I've not looked lately at the stripe code, but it
> may allocate the same space on each node, using the access pattern for
> performance.

Joe, I looked at the output of xfs_bmap -v on a 2GB file on that striped
filesystem and it was written correctly, but du and df still weren't reporting
the correct file size. In any case, I switched to ext4 and this "bug" is gone.

However, another strange issue I'm having is this: create a directory, go into
it, and write a 100MB file (my wrapper just does dd if=/dev/zero of=someFile):

[root at gluster1 pirstripe]# mkdir tmp && cd tmp && ~me/nfsSpeedTest/nfsSpeedTest -s 100m -y -r -d
gluster1: Write test (dd): 44.300 MB/s 354.398 mbps 2.257 seconds

[root at gluster1 tmp]# stat nfsSpeedTest-71364644793634600136
  File: `nfsSpeedTest-71364644793634600136'
  Size: 104857600   Blocks: 204840     IO Block: 131072   regular file
Device: 1eh/30d     Inode: 18446744070399556490   Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-02-24 15:26:29.625841194 -0600
Modify: 2012-02-24 15:26:31.861762336 -0600
Change: 2012-02-24 15:26:31.861762336 -0600

[root at gluster1 tmp]# du -sh nfsSpeedTest-71364644793634600136
101M    nfsSpeedTest-71364644793634600136

[root at gluster1 tmp]# du -sh --apparent-size nfsSpeedTest-71364644793634600136
100M    nfsSpeedTest-71364644793634600136

So far so good.

[root at gluster1 tmp]# cd ..
[root at gluster1 pirstripe]# du -sh tmp/
21M     tmp/

That was unexpected! That's roughly the file size divided by the stripe count (5).

[root at gluster1 pirstripe]# du -sh --apparent-size tmp/
101M    tmp/

Using --apparent-size is an annoying workaround. Why is it doing that?
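For reference, a plain sparse file on a local filesystem shows the same kind of
split between du and du --apparent-size: du counts allocated blocks, while
--apparent-size reports the logical size (st_size). This is just a quick sketch
with GNU coreutils and a made-up file name, not a claim about what the stripe
translator actually does on each brick:

# Create a 100MB file where only the last 1MB is actually allocated
# (seek=99 leaves a 99MB hole at the start of the file).
dd if=/dev/zero of=sparse.dat bs=1M seek=99 count=1

du -sh sparse.dat                   # ~1M   (allocated blocks only)
du -sh --apparent-size sparse.dat   # 100M  (logical size, st_size)
stat -c 'size=%s blocks=%b' sparse.dat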