On Mon, Jan 21, 2013 at 08:08:53PM +0400, Glauber Costa wrote:
> On 11/28/2012 03:14 AM, Dave Chinner wrote:
> > [PATCH 09/19] list_lru: per-node list infrastructure
> >
> > This makes the generic LRU list much more scalable by changing it to
> > a {list,lock,count} tuple per node. There are no external API
> > changes in this changeover, so it is transparent to current users.
> >
> > [PATCH 10/19] shrinker: add node awareness
> > [PATCH 11/19] fs: convert inode and dentry shrinking to be node aware
> >
> > Adds a nodemask to the struct shrink_control for callers of
> > shrink_slab to set appropriately for their reclaim context. This
> > nodemask is then passed by the inode and dentry cache reclaim code
> > to the generic LRU list code to implement node aware shrinking.
>
> I have a follow-up question that popped up from a discussion between me
> and my very American friend Johnny Wheeler, also known as Johannes
> Weiner (CC'd). I actually remember us discussing this, but I don't fully
> remember the outcome. And since I can't find it anywhere, it must have
> been in a medium other than e-mail. So I thought it would do no harm to
> at least document it...
>
> Why are we doing this per-node, instead of per-zone?
>
> It seems to me that the goal is to collapse all zones of a node into a
> single list, but since the number of zones is not terribly larger than
> the number of nodes, and zones are where the pressure comes from, what
> do we really gain from this?

The number is quite a bit higher - there are platforms with 5 zones to
a node. The reality is, though, that for most platforms slab
allocations come from a single zone - they never come from ZONE_DMA,
ZONE_HIGHMEM or ZONE_MOVABLE, so there is no good reason for having
cache LRUs for those zones. So, two zones at most.

And then there's the complexity issue - it's simple/trivial to use
per-node lists, node masks, etc.
It's an obvious abstraction that everyone understands, is simple to
reason about, achieves exactly the purpose that is needed, and is not
tied to the /current/ implementation of the VM memory management code.

I don't see any good reason for tying LRUs to MM zones. The original
implementation of the per-node shrinkers by Nick Piggin did this: the
LRUs for the dentry and inode caches were embedded in the struct zone,
and it wasn't generically extensible because of that. i.e. node-aware
shrinkers were directly influenced by the zone infrastructure, and so
the internal implementation of the mm subsystem started leaking out and
determining how completely unrelated subsystems had to implement their
own cache management.....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html