Re: increasing ext3 or io responsiveness

Eric,

I've also noticed that tweaking bdflush can make a big difference in
performance.  I ran Postmark benchmarks with 1, 2, 4, 8, 16, and 32
threads on a variety of encrypting file systems, and the four-thread
case was significantly slower than 2 and 8 (also 3 and 5), where I was
expecting a nice linear progression.  The amount of I/O being generated
by those four threads was "just right" to make this oddity show up.

After quite a bit of thinking, I started to play with the bdflush
parameters.  It turned out that the default setting of 60% for
nfract_sync was causing me problems, so I changed it to 90% and the
anomalous behavior went away.

I would try tweaking nfract_sync (the seventh number) without bumping
up nfract (the first number) so much.  That way bdflush will kick in
and keep flushing until the percentage of dirty buffers drops to
nfract_stop (the eighth number), but hopefully it won't cause your
process to hang by synchronously flushing so many buffers.
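A minimal sketch of that tweak, assuming the 2.4-era nine-field
/proc/sys/vm/bdflush layout (nfract is field 1, nfract_sync field 7,
nfract_stop field 8) -- the field edit is shown on the stock defaults,
and the actual /proc write (root only, 2.4 kernels) is left commented:

```shell
# Stock 2.4 defaults for /proc/sys/vm/bdflush (nine fields):
cur="30 500 0 0 500 3000 60 20 0"

# Raise only nfract_sync (field 7) to 90%, leaving nfract (field 1)
# and all the other fields exactly as they were.
new=$(echo "$cur" | awk '{ $7 = 90; print }')
echo "$new"
# → 30 500 0 0 500 3000 90 20 0

# On a live 2.4 system you would read and write /proc directly:
#   cur=$(cat /proc/sys/vm/bdflush)
#   echo "$new" > /proc/sys/vm/bdflush
```

This keeps the background-flush threshold (nfract) where it was, so
bdflush still starts early, while pushing out the point at which your
process is forced to flush synchronously.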

Charles

BTW, My performance comparison of crypto file systems, including this
bdflush behavior, is described in this paper:
http://www.fsl.cs.sunysb.edu/docs/nc-perf/perf.pdf

On Thu, 2004-02-05 at 12:00, Eric Wood wrote: 
> Our Invoice posting routine (intensive harddrive io) freezes every few
> seconds to flush the cache.  Reading this:
> 
> https://listman.redhat.com/archives/ext3-users/2002-November/msg00070.html
> 
> 
> I decided to try:
> 
> # elvtune -r 2048 -w 131072 /dev/sda
> # echo "90 500 0 0 600000 600000 95 20 0" >/proc/sys/vm/bdflush
> # run_post_routine
> # elvtune -r 128 -w 512 /dev/sda
> # echo "30 500 0 0 500 3000 60 20 0" >/proc/sys/vm/bdflush
> # sync
> 
> I like it, but I think that's way too lax and risky - the whole post routine
> never wrote to disk until I sync'd!  But, is there a setting that would
> ensure reliable constant i/o so that my post process is pretty much all
> flushed in real time?  Is constantly changing the bdflush parameters before
> the type of job I'm about to run a bad thing?  I noticed that changing back
> to the "30 500 0 0 500 3000 60 20 0" default doesn't flush to queue, I still
> had to "sync".
> 
> -Eric Wood
> 
> 
> _______________________________________________
> 
> Ext3-users@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/ext3-users


