On 09/18/13 18:24, Dave Chinner wrote:
On Wed, Sep 18, 2013 at 04:48:45PM -0500, Mark Tinguely wrote:
On 09/08/13 20:33, Dave Chinner wrote:
From: Dave Chinner<dchinner@xxxxxxxxxx>
CPU overhead of buffer lookups dominates most metadata-intensive
workloads. The thing is, most such workloads hit a relatively
small number of buffers repeatedly, and so caching recently hit
buffers is a good idea.
Add a hashed lookaside buffer that records recent buffer lookup
successes and is searched before doing an rb-tree lookup. If we get
a hit, we avoid the expensive rb-tree walk and greatly reduce the
overhead of the lookup. If we get a cache miss, then we've added an
extra CPU cacheline miss to the lookup.
....
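For reference, here is a minimal userspace sketch of the lookaside
idea described above; the table size, hash function, and all names
are illustrative assumptions, not the patch's actual code:

	#include <stddef.h>
	#include <stdint.h>

	#define LOOKASIDE_SIZE	64	/* small power-of-two table */

	struct buf {
		uint64_t	blkno;		/* disk address, the lookup key */
		size_t		numblks;
	};

	struct lookaside {
		struct buf	*slots[LOOKASIDE_SIZE];
	};

	static inline unsigned int
	lookaside_hash(uint64_t blkno)
	{
		/* cheap hash; an arbitrary choice for this sketch */
		return (unsigned int)(blkno >> 3) & (LOOKASIDE_SIZE - 1);
	}

	/* Check the lookaside before walking the rb-tree. A slot can
	 * hold a different or stale buffer, so verify the key on a hit. */
	static struct buf *
	lookaside_find(struct lookaside *la, uint64_t blkno, size_t numblks)
	{
		struct buf *bp = la->slots[lookaside_hash(blkno)];

		if (bp && bp->blkno == blkno && bp->numblks == numblks)
			return bp;	/* hit: rb-tree walk avoided */
		return NULL;		/* miss: fall back to the rb-tree */
	}

	/* Record a successful lookup so the next search can hit it. */
	static void
	lookaside_insert(struct lookaside *la, struct buf *bp)
	{
		la->slots[lookaside_hash(bp->blkno)] = bp;
	}

The key point is the verification on a hit: the lookaside is only a
hint, and the rb-tree remains the authoritative index.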
Low cost, possibly higher return. The idea looks good to me.
What happens in xfs_buf_get_map() when we lose the xfs_buf_find() race?
What race is that?
I was thinking of two overlapping callers racing through the two
calls to xfs_buf_find(). But my mistake was about where the lookaside
entry gets added: it is added correctly in the second call to
xfs_buf_find(), which makes sure another find did not beat this one
to the insert. So yes, no entry needs to be removed.
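Sketched below is the find/alloc/find-again pattern being discussed.
It is a compilable toy with simplified names and types (assumptions,
not xfs_buf_get_map() itself), with the rb-tree reduced to a single
slot and locking omitted:

	#include <stdlib.h>

	struct buf { unsigned long blkno; };

	/* Toy stand-in for the rb-tree: one global slot. Returns an
	 * existing buffer on a hit; otherwise inserts new_bp (if any). */
	static struct buf *tree_slot;

	static struct buf *
	cache_find_or_insert(unsigned long blkno, struct buf *new_bp)
	{
		if (tree_slot && tree_slot->blkno == blkno)
			return tree_slot;	/* found: insert race lost */
		if (new_bp)
			tree_slot = new_bp;	/* miss: we win the insert */
		return new_bp;
	}

	static struct buf *
	buf_get(unsigned long blkno)
	{
		struct buf *bp, *new_bp;

		bp = cache_find_or_insert(blkno, NULL);	/* first find */
		if (bp)
			return bp;		/* cache hit, no allocation */

		new_bp = malloc(sizeof(*new_bp));
		if (!new_bp)
			return NULL;
		new_bp->blkno = blkno;

		/*
		 * Second find: in the patch, the lookaside entry is
		 * recorded inside this call, after the winner of the
		 * insert race is known.
		 */
		bp = cache_find_or_insert(blkno, new_bp);
		if (bp != new_bp)
			free(new_bp);		/* another thread beat us */
		return bp;
	}

Because the entry is recorded only in the second find, after the race
is resolved, the loser frees its buffer and the lookaside never points
at a buffer that failed to make it into the rb-tree.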
--Mark.