Hi,

My conf files for client and server are below. You can reproduce this problem with qemu and the qcow2 format: one big file (2 GB) on the server.

Regards,
Nicolas

######################################## client spec file

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.98.98.1
  option remote-subvolume brick
end-volume

volume booster
  type performance/booster
  subvolumes client1
end-volume

volume readahead
  type performance/read-ahead
  option page-size 128KB
  option page-count 64
  subvolumes client1
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4
  subvolumes readahead
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 512MB                # default is 32MB
  option page-size 256KB                 # default is 128KB
  option force-revalidate-timeout 7200   # default is 1
  subvolumes iothreads
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 512KB   # default is 0bytes
  option flush-behind on        # default is 'off'
  subvolumes io-cache
end-volume

####################################################### server spec file

volume brick1
  type storage/posix
  option directory /mnt/disks/export
end-volume

volume brick
  type performance/io-threads
  option thread-count 4
  option cache-size 256MB
  subvolumes brick1
end-volume

volume readahead-brick
  type performance/read-ahead
  option page-size 1M
  option page-count 32
  subvolumes brick
end-volume

volume server
  type protocol/server
  option window-size 2097152
  option transport-type tcp/server   # For TCP/IP transport
  option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  option auth.ip.brick.allow *
  subvolumes readahead-brick
end-volume

On Sat, Jun 7, 2008 at 8:03 AM, Krishna Srinivas <krishna@xxxxxxxxxxxxx> wrote:
> Nicolas,
>
> Just for the records, can you give your spec files?
> How many dirs and files do you have (to get an
> idea to reproduce the problem in our setup)?
>
> Krishna
>
> On Fri, Jun 6, 2008 at 10:05 PM, nicolas prochazka
> <prochazka.nicolas@xxxxxxxxx> wrote:
>> Hi,
>> I'm using glusterfs with sparse files; read is OK and works fine, but
>> it seems that write does not work. glusterfsd takes a lot of CPU resources
>> and write is very, very slow.
>> I'm using a simple configuration for server and client (an NFS-like
>> model with performance translators).
>> Is this a known bug?
>>
>> Regards,
>> Nicolas Prochazka.
>>
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel@xxxxxxxxxx
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>
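For anyone trying to reproduce this: the sketch below shows one way to set up the 2 GB sparse-file case Nicolas describes. The mount point and image path are assumptions (not from the thread), and the qemu-img step is left as a comment; the runnable part just creates a plain sparse file and compares apparent size to allocated blocks, which exercises the same sparse-write path.

```shell
# Assumed paths -- adjust to your setup:
#   /mnt/gluster        glusterfs client mount (hypothetical)
# Original repro would be (requires qemu-img):
#   qemu-img create -f qcow2 /mnt/gluster/disk.qcow2 2G

# A plain sparse file illustrates the same on-disk layout:
f=$(mktemp)
truncate -s 2G "$f"            # 2 GiB apparent size, typically no blocks yet
apparent=$(stat -c %s "$f")    # bytes reported by the inode
allocated=$(stat -c %b "$f")   # 512-byte blocks actually allocated
echo "apparent=$apparent allocated_blocks=$allocated"
rm -f "$f"
```

Writing into such a file forces block allocation as the holes fill in, which is where the slow-write / high-CPU behavior reported above would show up.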