On Wed, 31 Mar 2010, Andrea Arcangeli wrote:

> > Large pages would be more independent from the page table structure with
> > the approach that I outlined earlier since you would not have to do these
> > sync tricks.
>
> I was talking about memory compaction. collapse_huge_page will still
> be needed forever regardless of split_huge_page existing or not.

Right, but neither function would be so page table format dependent as
here.

> > There are applications that have benefited for years already from 1G page
> > sizes (available on IA64 f.e.). So why wait?
>
> Because the difficulty on finding hugepages free increases
> exponentially with the order of allocation. Plus increasing MAX_ORDER
> so much would slowdown everything for no gain because we will fail to
> obtain 1G pages freed. The cost of compacting 1G pages also is 512
> times bigger than with regular pages. It's not feasible right now with
> current memory sizes, I just said it's probably better to move to
> PAGE_SIZE 2M instead of extending to 1G pages in a kernel whose
> PAGE_SIZE is 4k. You would still want 4k pages for small files.
>
> Last but not the least it can be done but considering I'm abruptly
> failing to merge 35 patches (and surely your comments aren't helping
> in that direction...), it'd be counter-productive to make the core

Well, by now you may have realized that I am not too enthusiastic about
the approach. But certainly 2M can be done before 1G support. I was not
suggesting that 1G support is a requirement. However, 1G and 2M support
at the same time would force a cleaner design and maybe get rid of the
page table hackery here.
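
For readers wanting to check the 512x figure quoted above: it follows
from simple page-order arithmetic, assuming a 4k base PAGE_SIZE as on
x86-64 and reading "regular pages" there as the 2M huge pages the patch
set provides. The snippet below is purely illustrative and not taken
from any patch in this series.

/*
 * Illustrative only: the page-order arithmetic behind the 2M vs 1G
 * discussion above, assuming a 4k base PAGE_SIZE as on x86-64.
 * Not taken from the patch set.
 */
#include <stdio.h>

int main(void)
{
	unsigned long base = 4096UL;         /* 4k base page        */
	unsigned long pmd_huge = 2UL << 20;  /* 2M (PMD-level) page */
	unsigned long pud_huge = 1UL << 30;  /* 1G (PUD-level) page */

	/* 2M is an order-9 allocation, 1G an order-18 allocation. */
	printf("2M huge page = %lu base pages (order 9)\n",
	       pmd_huge / base);
	printf("1G huge page = %lu base pages (order 18)\n",
	       pud_huge / base);

	/*
	 * A 1G region spans 512 PMD-sized huge pages, so compacting
	 * memory for one 1G page moves roughly 512 times as much data
	 * as compacting for one 2M page, which is the factor quoted
	 * above.
	 */
	printf("1G / 2M = %lu\n", pud_huge / pmd_huge);
	return 0;
}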