Hi list,

We're evaluating GlusterFS as a storage solution for our Xen cluster. We want to use it to store rootfs images of virtual machines and to be able to use advanced features like live migration. Unfortunately, we ran into problems when trying to use XFS on those images (ext3 works just fine, but we would really like to use XFS).

When trying to create an XFS filesystem on an image stored on GlusterFS we get:

  # mkfs.xfs tst.img
  mkfs.xfs: pwrite64 failed: Invalid argument

Server debug output:

2008-08-15 15:38:03 D [inode.c:367:__active_inode] brick/inode: activating inode(268640449), lru=1/1024
2008-08-15 15:38:04 E [posix.c:1212:posix_writev] brick: O_DIRECT: offset is Invalid

Client debug output:

2008-08-15 15:38:04 D [fuse-bridge.c:1604:fuse_writev_cbk] glusterfs-fuse: 524797: WRITE => 512/512,939524096/1073741824
2008-08-15 15:38:04 D [fuse-bridge.c:1641:fuse_write] glusterfs-fuse: 524798: WRITE (0x2aaaab200a00, size=512, offset=402653184)
2008-08-15 15:38:04 D [fuse-bridge.c:1604:fuse_writev_cbk] glusterfs-fuse: 524798: WRITE => 512/512,402653184/1073741824
2008-08-15 15:38:04 D [fuse-bridge.c:1641:fuse_write] glusterfs-fuse: 524799: WRITE (0x2aaaab200a00, size=512, offset=134218240)
2008-08-15 15:38:04 E [fuse-bridge.c:1609:fuse_writev_cbk] glusterfs-fuse: 524799: WRITE => -1 (22)
2008-08-15 15:38:04 D [fuse-bridge.c:1665:fuse_flush] glusterfs-fuse: 524800: FLUSH 0x2aaaab200a00
2008-08-15 15:38:04 D [fuse-bridge.c:916:fuse_err_cbk] glusterfs-fuse: 524800: (16) ERR => 0
2008-08-15 15:38:04 D [fuse-bridge.c:1692:fuse_release] glusterfs-fuse: 524801: CLOSE 0x2aaaab200a00
2008-08-15 15:38:04 D [fuse-bridge.c:916:fuse_err_cbk] glusterfs-fuse: 524801: (17) ERR => 0

Server spec:

volume brick
  type storage/posix
  option directory /mnt/export/test
end-volume

volume server
  type protocol/server
  option transport-type tcp/server  # For TCP/IP transport
  option auth.ip.brick.allow *
  subvolumes brick
end-volume

Client spec:

volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.211.2
  option remote-subvolume brick
end-volume

The server was started with:

  glusterfsd -f glusterfs-server-simple.vol --no-daemon --log-file=/dev/stdout --log-level=DEBUG

and the client with:

  glusterfs -f glusterfs-client-simple.vol --direct-io-mode=DISABLE --no-daemon --log-file=/dev/stdout --log-level=DEBUG /mnt/glusterfs/

(We use --direct-io-mode=DISABLE as suggested in:
http://www.gluster.org/docs/index.php/Technical_FAQ#Loop_mounting_image_files_stored_in_glusterFS_file_system)

The server and client were on the same machine, running Debian etch and glusterfs 1.3.9, built on Jul 11 2008 15:10:51, repository revision glusterfs--mainline--2.5--patch-770.

Thanks in advance for any help with this problem.

Regards,
Notch
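P.S. For what it's worth, the failure pattern in the client log looks consistent with an O_DIRECT offset-alignment check on the server side: the WRITE at offset 402653184 (a multiple of 4096) succeeds, while the one at offset 134218240 (a multiple of 512 but not of 4096) is the one rejected with EINVAL (22). A minimal sketch of that guess follows; the 4096-byte block size and the offset_ok helper are our assumptions, not actual glusterfs code:

```python
# Hypothetical sketch of the alignment rule we suspect posix_writev enforces
# for O_DIRECT writes; the 4096-byte block size is an assumption on our part.
BLOCK = 4096

def offset_ok(offset, block=BLOCK):
    """True if `offset` meets a `block`-byte O_DIRECT alignment requirement."""
    return offset % block == 0

# Offsets taken from the client log above:
print(offset_ok(402653184))  # the WRITE that succeeded -> True (4096-aligned)
print(offset_ok(134218240))  # the WRITE that failed with EINVAL (22) -> False
```

If that guess is right, it would explain why ext3's write pattern happens to work while mkfs.xfs's does not.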