Andreas Dilger wrote:
> On Feb 08, 2002 12:42 -0500, Bill McGonigle wrote:
> > I'm trying to eke every last bit of performance out of a sync NFS server
> > using ext3.
> >
> > I created a 400MB journal, for my data=journal disk. When copying a
> > 200MB file to the machine, by the sound of the disk, it flushed data to
> > disk about 4 times, so, about every 50MB.
>
> I think there is also a 5 second flush interval. IIRC, there is a
> parameter which can be tuned, but I don't know the exact mechanism
> (compile time, mount option, /proc entry).
>
> Hmm, looking further, it seems it is a compile-time option in
> fs/jbd/journal.c:journal_init_common(), where it sets the
> journal commit interval.
>
> I thought Andrew at least had a patch to make the journal commit interval
> match the flush interval for bdflush.

Actually, I just set the commit interval to 100000000, so commits are
initiated by kupdate instead of kjournald. You can then tune the commit
interval with /proc/sys/vm/bdflush (there's a userspace app which sets
bdflush parameters too, but I can't remember its name).

In Bill's testing, commits will most likely be forced by exhaustion of
journal space, rather than by kjournald timeout. This is determined by
journal->j_max_transaction_buffers, which is initialised in journal_reset():

	journal->j_max_transaction_buffers = journal->j_maxlen / 4;

Probably this could be changed to journal->j_maxlen / 2 with no ill
effect. I haven't tried it.

The other (complementary) option is to simply hack mke2fs so that it
permits larger journals. In e2fsprogs's misc/util.c:figure_journal_size():

-	if (j_blocks < 1024 || j_blocks > 102400) {
+	if (j_blocks < 1024 || j_blocks >= fs->super->s_free_blocks_count) {