Hi,

What patch are you using? With glusterfs--mainline--3.0--patch-329 and a basic setup of write-behind over protocol/client, glusterfs CPU usage never went above 65% in my tests. Can you please confirm whether the problem persists in patch-329?

regards,

On Thu, Aug 28, 2008 at 6:27 AM, Dai, Manhong <daimh at umich.edu> wrote:
> Hi,
>
> client.vol is
>
> volume unify-brick
>   type cluster/unify
>   option scheduler rr # round robin
>   option namespace muskie-ns
>   # subvolumes muskie-brick pike1-brick pike2-brick pike3-brick
>   subvolumes muskie-brick pike1-brick pike3-brick
> end-volume
>
> volume wb
>   type performance/write-behind
>   option aggregate-size 1MB
>   option flush-behind on
>   subvolumes unify-brick
> end-volume
>
>
> The command "yes abcdefghijklmn | while read l; do echo $l; done > a" causes
> the glusterfs process to become 100% busy and the file system to hang when
> the output size is around the aggregate-size. Removing the write-behind
> translator gets rid of this problem.
>
>
> Best,
> Manhong
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
>

--
Raghavendra G

A centipede was happy quite, until a toad in fun,
Said, "Prey, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run. -Anonymous
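
For reference, a minimal "write-behind over protocol/client" client volfile of the kind mentioned in the reply might look roughly like the sketch below. The remote-host address and remote-subvolume name are placeholders (not taken from this thread); aggregate-size and flush-behind match the reporter's settings.

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1       # placeholder: address of the glusterfsd server
  option remote-subvolume brick        # placeholder: name of the exported volume
end-volume

volume wb
  type performance/write-behind
  option aggregate-size 1MB            # same value as in the reported setup
  option flush-behind on
  subvolumes client
end-volume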