On 11/27/2012 03:14 PM, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
>
> Now that we have an LRU list API, we can start to enhance the
> implementation. This splits the single LRU list into per-node lists
> and locks to enhance scalability. Items are placed on lists
> according to the node the memory belongs to. To make scanning the
> lists efficient, also track whether the per-node lists have entries
> in them in an active nodemask.
>
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ---
>  include/linux/list_lru.h |  14 ++--
>  lib/list_lru.c           | 160 +++++++++++++++++++++++++++++++++++-----------
>  2 files changed, 129 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index 3423949..b0e3ba2 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -8,21 +8,23 @@
>  #define _LRU_LIST_H 0
>
>  #include <linux/list.h>
> +#include <linux/nodemask.h>
>
> -struct list_lru {
> +struct list_lru_node {
>  	spinlock_t		lock;
>  	struct list_head	list;
>  	long			nr_items;
> +} ____cacheline_aligned_in_smp;
> +
> +struct list_lru {
> +	struct list_lru_node	node[MAX_NUMNODES];
> +	nodemask_t		active_nodes;
>  };

MAX_NUMNODES will default to 1 << 9 (512), if I'm not mistaken. And while
your list_lru_node carries only ~28 bytes of payload on 64-bit systems,
the ____cacheline_aligned_in_smp annotation pads each array element out to
a full 64-byte cache line (128 with lock debugging). So we're talking
about 32k per lru, statically, no matter how many nodes the machine
actually has.

Superblocks alone are present by the dozens even on a small system, and I
believe the whole goal of this API is to get more users to switch to it.
At 32k apiece, that easily adds up to a respectable number of megabytes of
mostly-empty per-node arrays. Isn't that a bit too much?

I am wondering if we can't do better here and at least allocate (and grow)
the per-node array according to the actual number of nodes. A rough sketch
of what I have in mind follows.
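Something along these lines, sized by nr_node_ids (the possible-node count
the kernel computes at boot), would shrink the static footprint to one
cache line per actual node. The function names mirror the existing API,
but this is a sketch under those assumptions, not a tested patch:

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/nodemask.h>

struct list_lru {
	struct list_lru_node	*node;		/* nr_node_ids entries */
	nodemask_t		active_nodes;
};

int list_lru_init(struct list_lru *lru)
{
	int i;

	/* Size by the boot-time possible-node count, not MAX_NUMNODES. */
	lru->node = kcalloc(nr_node_ids, sizeof(*lru->node), GFP_KERNEL);
	if (!lru->node)
		return -ENOMEM;

	nodes_clear(lru->active_nodes);
	for (i = 0; i < nr_node_ids; i++) {
		spin_lock_init(&lru->node[i].lock);
		INIT_LIST_HEAD(&lru->node[i].list);
		lru->node[i].nr_items = 0;
	}
	return 0;
}

void list_lru_destroy(struct list_lru *lru)
{
	kfree(lru->node);
	lru->node = NULL;
}

Note that init can now fail, so callers would have to check the return
value. On the other hand, since nr_node_ids already covers every
*possible* node, including ones that could be hot-added later, the array
never actually needs to grow at runtime, and on a typical single-node box
the per-lru cost drops from 32k to a single 64-byte element plus a
pointer.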
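For completeness, the active_nodes side of the design should carry over
unchanged with a dynamically sized array: the add path sets a node's bit
on the empty-to-nonempty transition, and walkers can then skip empty nodes
entirely. Again a rough sketch of my reading of the description, not the
elided lib/list_lru.c hunks themselves:

#include <linux/mm.h>		/* virt_to_page(), page_to_nid() */

/*
 * Add: derive the node from the item's memory. Assumes the item lives
 * in the direct mapping (virt_to_page() would be wrong for vmalloc).
 */
int list_lru_add(struct list_lru *lru, struct list_head *item)
{
	int nid = page_to_nid(virt_to_page(item));
	struct list_lru_node *nlru = &lru->node[nid];

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		list_add_tail(item, &nlru->list);
		if (nlru->nr_items++ == 0)
			node_set(nid, lru->active_nodes);
		spin_unlock(&nlru->lock);
		return 1;	/* item was added */
	}
	spin_unlock(&nlru->lock);
	return 0;		/* item was already on a list */
}

/* Count: visit only nodes whose bit is set; empty nodes cost nothing. */
long list_lru_count(struct list_lru *lru)
{
	long count = 0;
	int nid;

	for_each_node_mask(nid, lru->active_nodes) {
		struct list_lru_node *nlru = &lru->node[nid];

		spin_lock(&nlru->lock);
		count += nlru->nr_items;
		spin_unlock(&nlru->lock);
	}
	return count;
}

The del path would presumably clear the node's bit again when nr_items
drops back to zero, so the mask stays an accurate "has entries" summary.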