On Sat, 2012-07-07 at 14:26 -0400, Rik van Riel wrote:

> > +/*
> > + * Assumes symmetric NUMA -- that is, each node is of equal size.
> > + */
> > +static void set_max_mem_load(unsigned long load)
> > +{
> > +	unsigned long old_load;
> > +
> > +	spin_lock(&max_mem_load.lock);
> > +	old_load = max_mem_load.load;
> > +	if (!old_load)
> > +		old_load = load;
> > +	max_mem_load.load = (old_load + load) >> 1;
> > +	spin_unlock(&max_mem_load.lock);
> > +}
>
> The above in your patch kind of conflicts with this bit
> from patch 6/26:

Yeah... it's pretty broken. It's also effectively disabled, but yeah.

> Looking at how the memory load code is used, I wonder
> if it would make sense to count "zone size - free - inactive
> file" pages instead?

Something like that, although I guess we'd want a sum over all the
zones in a node for that.