On Mon, Feb 24, 2014 at 5:16 PM, Davidlohr Bueso <davidlohr@xxxxxx> wrote:
>
> If we add the two missing bits to the shifting and use PAGE_SHIFT (x86
> at least) we get just as good results as with 10. So we would probably
> prefer hashing based on the page number and not some offset within the
> page.

So just

    int idx = (addr >> PAGE_SHIFT) & 3;

works fine? That makes me think it all just wants to be maximally spread
out to approximate some NRU when adding an entry.

Also, as far as I can tell, "vmacache_update()" should then become just a
simple unconditional

    int idx = (addr >> PAGE_SHIFT) & 3;
    current->vmacache[idx] = newvma;

because your original code did

    +	if (curr->vmacache[idx] != newvma)
    +		curr->vmacache[idx] = newvma;

and that doesn't seem to make sense, since if "newvma" was already in the
cache, then we would have found it when looking up, and we wouldn't be
here updating it after doing the rb-walk?

And with the per-mm cache removed, all that should remain is that simple
version, no? You don't even need the "check the vmacache sequence number
and clear if bogus" step, because the rule should be that you have always
done a "vmacache_find()" first, which should have done that.

Anyway, can you send the final cleaned-up and simplified (and re-tested)
version? There are enough changes discussed here that I don't want to
track the end result mentally.

           Linus