Hi Dave,

On Mon, Aug 15, 2011 at 1:46 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> That's usage for the entire slab, though, and we don't have a dentry
> slab per superblock so I don't think that helps us. And with slab
> merging, I think that even if we did have a slab per superblock,
> they'd end up in the same slab context anyway, right?

You could add a flag to disable slab merging, but there's no sane way
to fix the per-superblock thing in slab.

On Mon, Aug 15, 2011 at 1:46 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> Ideally what we need is a slab, LRU and shrinkers all rolled into a
> single infrastructure handle so we can simply set them up per
> object, per context etc and not have to re-invent the wheel for
> every single slab cache/LRU/shrinker setup we have in the kernel.
>
> I've got a rough node-aware generic LRU/shrinker infrastructure
> prototype that is generic enough for most of the existing slab
> caches with shrinkers, but I haven't looked at what is needed to
> integrate it with the slab cache code. That's mainly because I don't
> like the idea of having to implement the same thing 3 times in 3
> different ways and debug them all before anyone would consider it
> for inclusion in the kernel.
>
> Once I've sorted out the select_parent() use-the-LRU-for-disposal
> abuse and have a patch set that survives a 'rm -rf *' operation,
> maybe we can then talk about what is needed to integrate stuff into
> the slab caches....

Well, now that I really understand what you're trying to do here, it's
probably best to keep slab as-is and implement "slab accounting" on top
of it. You'd have something like you do now, but in a slightly more
generic form:

  struct kmem_accounted_cache {
          struct kmem_cache *cache;

          /* ... statistics ... */
  };

  void *kmem_accounted_alloc(struct kmem_accounted_cache *c, gfp_t gfp)
  {
          if (/* within limits */)
                  return kmem_cache_alloc(c->cache, gfp);

          return NULL;
  }

Does something like that make sense to you?

                        Pekka
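P.S. In case it helps, here's a slightly more concrete sketch of the
same idea, with the "/* within limits */" check filled in by a
hypothetical object-count limit. The field names (allocated, limit),
the free-side hook, and the accounting policy are all made up for
illustration; they're not an existing API:

  #include <linux/slab.h>
  #include <linux/atomic.h>

  struct kmem_accounted_cache {
          struct kmem_cache *cache;
          atomic_long_t allocated;    /* objects currently outstanding */
          long limit;                 /* refuse allocations beyond this */
  };

  void *kmem_accounted_alloc(struct kmem_accounted_cache *c, gfp_t gfp)
  {
          /* Reserve a slot first; back out if we overshot the limit. */
          if (atomic_long_add_return(1, &c->allocated) > c->limit) {
                  atomic_long_dec(&c->allocated);
                  return NULL;
          }
          return kmem_cache_alloc(c->cache, gfp);
  }

  void kmem_accounted_free(struct kmem_accounted_cache *c, void *obj)
  {
          kmem_cache_free(c->cache, obj);
          atomic_long_dec(&c->allocated);
  }

Each superblock could then carry its own kmem_accounted_cache wrapping
the one shared dentry slab, so you'd get per-superblock limits and
statistics without touching the slab allocator itself, and without
caring whether the underlying caches got merged.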