On Sun, 11 Apr 2010 13:52:29 +0200
Ingo Molnar <mingo@xxxxxxx> wrote:

> Also, the proportion of 4K:2MB is a fixed constant, and CPUs don't
> grow their TLB caches as much as typical RAM size grows: they'll grow
> it according to the _mean_ working set size - while the 'max' working
> set gets larger and larger due to the increasing [proportional] gap
> to RAM size.
>
> This is why I think we should think about hugetlb support today and
> this is why I think we should consider elevating hugetlbs to the next
> level of built-in Linux VM support.

I respectfully disagree with your analysis. While it is true that the
number of "level 1" TLB entries has not kept up with RAM or
application sizes, the CPU designers have arranged things so that the
cache effectively acts as a "level 2" (or, technically, level 3) TLB.
A TLB miss that is serviced from the cache is so cheap that in almost
all cases it is hidden by the out-of-order engine; you can defeat this
only by touching a single byte per page, walking randomly through
memory, and enforcing a strict ordering between those one-byte
accesses (a sketch of such a pattern is appended below). So in
practice, for many applications, as long as the CPU cache scales with
application size, the TLB more or less scales too.

Now hugepages do have some other interesting advantages, most notably
that they save pagetable memory, which for something like TPC-C on a
fork-based database can be a measurable win; some rough arithmetic on
that is appended below as well.

-- 
Arjan van de Ven
Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
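
A sketch of that pathological access pattern, for illustration only
(this userspace C program and its parameters are my own made-up
example, not anything measured). One pointer lives at the start of
each 4K page, the pages are linked in random order, and each load is
data-dependent on the previous one, so the out-of-order engine cannot
overlap the page-table walks:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ		4096UL
#define NPAGES		(64UL * 1024)	/* 256 MB buffer - arbitrary */

int main(void)
{
	char *buf = malloc(NPAGES * PAGE_SZ);
	size_t *order = malloc(NPAGES * sizeof(*order));
	void **p;
	size_t i;

	if (!buf || !order)
		return 1;

	/* Fisher-Yates shuffle of the page visit order. */
	for (i = 0; i < NPAGES; i++)
		order[i] = i;
	for (i = NPAGES - 1; i > 0; i--) {
		size_t j = rand() % (i + 1), t = order[i];
		order[i] = order[j];
		order[j] = t;
	}

	/* Plant one pointer at the start of each page, chained in
	 * that random order; only the first word of each page is
	 * ever touched. */
	for (i = 0; i < NPAGES; i++)
		*(void **)(buf + order[i] * PAGE_SZ) =
			buf + order[(i + 1) % NPAGES] * PAGE_SZ;

	/* The chase: every load is a likely TLB miss *and* a
	 * serialising data dependency, which is exactly what keeps
	 * the out-of-order engine from hiding the walk. */
	p = (void **)(buf + order[0] * PAGE_SZ);
	for (i = 0; i < 16 * NPAGES; i++)
		p = *p;

	printf("%p\n", (void *)p);	/* defeat dead-code elimination */
	return 0;
}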
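
And the back-of-the-envelope pagetable arithmetic for the fork-based
database case. The 64 GB shared mapping and 2000 forked backends are
made-up numbers for illustration, not from any workload I have
measured; the 8-byte entry size is x86-64's:

#include <stdio.h>

int main(void)
{
	unsigned long long mapped = 64ULL << 30;  /* 64 GB shared mapping */
	unsigned long long procs  = 2000;         /* forked backends */
	unsigned long long entry  = 8;            /* bytes per x86-64 entry */

	/*
	 * With 4K pages every process carries its own copy of the
	 * leaf pagetables, even when the mapped data itself is
	 * shared: one 8-byte PTE per 4 KB mapped.
	 */
	unsigned long long pte_4k = mapped / 4096 * entry * procs;

	/*
	 * With 2M pages the PMD entry maps the data directly and the
	 * whole PTE level disappears: one entry per 2 MB instead.
	 */
	unsigned long long pmd_2m = mapped / (2ULL << 20) * entry * procs;

	printf("4K pages: %llu MB of leaf pagetables across all processes\n",
	       pte_4k >> 20);
	printf("2M pages: %llu MB of PMD entries across all processes\n",
	       pmd_2m >> 20);
	return 0;
}

At these (made-up) numbers that is roughly 256000 MB of leaf
pagetables with 4K pages versus about 500 MB with 2M pages, a factor
of 512, which is where the measurable win on a fork-heavy database
comes from.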