On Mon, Feb 18, 2013 at 06:57:07PM -0500, Brian Foster wrote:
> The updated speculative preallocation algorithm becomes less
> effective in situations with a high number of concurrent,
> sequential writers. In running 32 sequential writers on a system
> with 32GB RAM, preallocs become fixed at a value of around 128MB.
> Update the heuristic to base the size of the prealloc on double
> the size of the preceding extent. This preserves the original
> aggressive speculative preallocation behavior at a slight cost of
> increasing the size of preallocated data regions following holes of
> sparse files.
> 
> Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> ---

You probably want to mention that it is the writeback bandwidth
slicing (for fairness across all dirty inodes) that is resulting in
the pattern of allocation that you saw. Hence different machines
with different amounts of RAM and write throughput will see
different results.

> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 912d83d..45a382d 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -362,7 +362,7 @@ xfs_iomap_eof_prealloc_initial_size(
>  	if (imap[0].br_startblock == HOLESTARTBLOCK)
>  		return 0;
>  	if (imap[0].br_blockcount <= (MAXEXTLEN >> 1))
> -		return imap[0].br_blockcount;
> +		return imap[0].br_blockcount << 1;
>  	return XFS_B_TO_FSB(mp, offset);
>  }

Works for me. Thanks, Brian.

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

-- 
Dave Chinner
david@xxxxxxxxxxxxx
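As an illustration of the heuristic the patch describes (double the
preceding extent on each EOF write), here is a small standalone C
sketch, not the kernel code. MAXEXTLEN below uses the XFS on-disk
extent length limit, but the final fallback differs: the real
function returns XFS_B_TO_FSB(mp, offset), the EOF offset in
filesystem blocks, while this sketch simply caps at MAXEXTLEN to
stay self-contained.

/*
 * Standalone sketch of the doubling prealloc heuristic; not the
 * kernel code. The fallback branch is a stand-in for
 * XFS_B_TO_FSB(mp, offset) in xfs_iomap_eof_prealloc_initial_size().
 */
#include <stdio.h>

#define MAXEXTLEN	((1 << 21) - 1)	/* max extent length, in fs blocks */

static unsigned long long
prealloc_initial_size(unsigned long long prev_blocks)
{
	if (prev_blocks == 0)			/* preceding range is a hole */
		return 0;
	if (prev_blocks <= (MAXEXTLEN >> 1))	/* double the preceding extent */
		return prev_blocks << 1;
	return MAXEXTLEN;			/* stand-in for XFS_B_TO_FSB() */
}

int
main(void)
{
	unsigned long long prev = 1;		/* first delalloc extent: 1 block */
	int i;

	/* Each sequential EOF write doubles the prealloc until the cap. */
	for (i = 0; i < 22; i++) {
		unsigned long long p = prealloc_initial_size(prev);

		printf("preceding extent %8llu blocks -> prealloc %8llu blocks\n",
		       prev, p);
		prev = p;
	}
	return 0;
}

Built with any C compiler, this prints the prealloc growing from 2
blocks by doubling on every write until it pins at the 2097151-block
cap, which is the aggressive growth behavior the commit message says
the change restores for concurrent sequential writers.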