On Mon, Nov 08, 2010 at 10:36:06AM -0500, Jeff Moyer wrote:
> Dave Chinner <david@xxxxxxxxxxxxx> writes:
> 
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > To avoid concerns that a single list and lock tracking the unaligned
> > IOs will not scale appropriately, create multiple lists and locks
> > and choose them by hashing the unaligned block being zeroed.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  fs/direct-io.c |   49 ++++++++++++++++++++++++++++++++++++-------------
> >  1 files changed, 36 insertions(+), 13 deletions(-)
> > 
> > diff --git a/fs/direct-io.c b/fs/direct-io.c
> > index 1a69efd..353ac52 100644
> > --- a/fs/direct-io.c
> > +++ b/fs/direct-io.c
> > @@ -152,8 +152,28 @@ struct dio_zero_block {
> >  	atomic_t	ref;		/* reference count */
> >  };
> >  
> > -static DEFINE_SPINLOCK(dio_zero_block_lock);
> > -static LIST_HEAD(dio_zero_block_list);
> > +#define DIO_ZERO_BLOCK_NR	37LL
> 
> I'm always curious to know how these numbers are derived.  Why 37?

It's a prime number large enough to give enough lists to minimise
contention whilst providing decent distribution for 8-byte aligned
addresses with low overhead.

XFS uses the same sort of waitqueue hashing for the global IO completion
wait queues used by truncation and inode eviction (see xfs_ioend_wait()).
Seemed reasonable (and simple!) just to copy that design pattern for
another global IO completion wait queue....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
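
[Editor's note: for readers who want to see the shape of the hashed
list/lock pattern being discussed, below is a minimal, compilable
userspace C sketch. The bucket/struct names, the modulo hash, and the
pthread locks are illustrative assumptions only; they are not the
actual fs/direct-io.c code, most of which is elided in the quoted hunk
above.]

	#include <pthread.h>
	#include <stdint.h>
	#include <stddef.h>

	/* Prime bucket count: decent spread for 8-byte aligned block numbers. */
	#define DIO_ZERO_BLOCK_NR	37

	struct dio_zero_block {
		uint64_t		block;	/* unaligned block being zeroed */
		struct dio_zero_block	*next;
	};

	struct zero_block_bucket {
		pthread_mutex_t		lock;
		struct dio_zero_block	*head;	/* blocks that hash to this bucket */
	};

	static struct zero_block_bucket zero_buckets[DIO_ZERO_BLOCK_NR];

	static void zero_buckets_init(void)
	{
		int i;

		for (i = 0; i < DIO_ZERO_BLOCK_NR; i++) {
			pthread_mutex_init(&zero_buckets[i].lock, NULL);
			zero_buckets[i].head = NULL;
		}
	}

	/* Hash the block number to pick one of the 37 lists/locks. */
	static struct zero_block_bucket *zero_block_bucket(uint64_t block)
	{
		return &zero_buckets[block % DIO_ZERO_BLOCK_NR];
	}

	/*
	 * Track a block: concurrent submitters only contend with each
	 * other when their blocks hash to the same bucket, instead of
	 * all serialising on a single global lock.
	 */
	static void dio_track_zero_block(struct dio_zero_block *zb)
	{
		struct zero_block_bucket *b = zero_block_bucket(zb->block);

		pthread_mutex_lock(&b->lock);
		zb->next = b->head;
		b->head = zb;
		pthread_mutex_unlock(&b->lock);
	}

The point of the prime bucket count is simply that a modulo hash of
8-byte aligned addresses does not collapse onto a few buckets, so lock
contention stays spread across the array.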