* Davidlohr Bueso <davidlohr@xxxxxx> wrote:

> I will look into doing the vma cache per thread instead of mm (I hadn't
> really looked at the problem like this) as well as Ingo's suggestion on
> the weighted LRU approach. However, having seen that we can cheaply and
> easily reach around a ~70% hit rate in a lot of workloads, it makes me
> wonder how good is good enough?

So I think it all really depends on the hit/miss cost difference. It
makes little sense to add a more complex scheme if it washes out most of
the benefits!

Also note the historic context: the _original_ mmap_cache, which I
implemented 16 years ago, was a front-line cache to a linear list walk
over all vmas (!).

This is the relevant 2.1.37pre1 code in include/linux/mm.h:

    /* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
    static inline struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr)
    {
            struct vm_area_struct *vma = NULL;

            if (mm) {
                    /* Check the cache first. */
                    vma = mm->mmap_cache;
                    if(!vma || (vma->vm_end <= addr) || (vma->vm_start > addr)) {
                            vma = mm->mmap;
                            while(vma && vma->vm_end <= addr)
                                    vma = vma->vm_next;
                            mm->mmap_cache = vma;
                    }
            }
            return vma;
    }

See that vma->vm_next iteration? It was awful - but back then most of us
had at most a couple of megs of RAM with just a few vmas. No RAM, no SMP,
no worries - the mm was really simple back then.

Today we have the vma rbtree, which is self-balancing and a lot faster
than a linear list walk ;-)

So I'd _really_ suggest first examining the assumptions behind the
cache: it being named 'cache' and having a hit rate does not in itself
guarantee that it gives us any worthwhile cost savings when put in front
of an rbtree ...

Thanks,

	Ingo
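
PS: a quick back-of-envelope sketch of the hit/miss cost point above,
with made-up placeholder cycle counts rather than measurements. In this
toy model the cache only pays off when hit_rate * rbtree_cost exceeds
the cost of checking the cache itself on every lookup:

    /*
     * Toy model of find_vma() cost with and without a front-line cache.
     * All numbers are hypothetical placeholders - substitute measured
     * values (e.g. from perf) before drawing any real conclusions.
     */
    #include <stdio.h>

    int main(void)
    {
            double hit_rate   = 0.70;   /* ~70% hit rate quoted above (assumed)  */
            double cache_cost = 5.0;    /* cycles to check mmap_cache (assumed)  */
            double rb_cost    = 50.0;   /* cycles for an rbtree lookup (assumed) */

            /* Every lookup pays the cache check; misses also pay the rbtree walk. */
            double with_cache    = cache_cost + (1.0 - hit_rate) * rb_cost;
            double without_cache = rb_cost;

            printf("with cache:    %.1f cycles/lookup\n", with_cache);
            printf("without cache: %.1f cycles/lookup\n", without_cache);
            printf("net saving:    %.1f cycles/lookup\n", without_cache - with_cache);

            return 0;
    }

With these placeholder numbers the cache still wins, but if an rbtree
lookup were only a handful of cycles more expensive than the cache check,
a ~70% hit rate would buy almost nothing - which is exactly the
assumption worth measuring first.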