Re: hugepages will matter more in the future

On Mon, 12 Apr 2010, Rik van Riel wrote:

> On 04/11/2010 11:52 AM, Linus Torvalds wrote:
> 
> > So here's the deal: make the code cleaner, and it's fine. And stop trying
> > to sell it with _crap_.
> 
> Since none of the hugepages proponents in this thread seem to have
> asked this question:
> 
> What would you like the code to look like, in order for hugepages
> code to be acceptable to you?

So as I already commented to Andrew, the code has no comments about the 
"big picture", and the largest comment I found was about a totally 
_trivial_ issue about replacing the hugepage by first clearing the entry, 
then flushing the tlb, and then filling it.

That needs hardly any comment at all, since that's what we do for _normal_ 
page table entries too when we change anything non-trivial about them. 
That's the antithesis of rocket science. Yet that was apparently 
considered the most important thing in the whole core patch to talk about!

And quite frankly, I've been irritated by the "timings" used to sell this 
thing from the start. The changelog for the entry makes a big deal out of 
the fact that there's just a single page fault per 2MB, and that the 
timing for clearing a huge region is faster the first time because you 
don't take a lot of page faults.

That's a "Duh!" moment too, but it never even talks about the issue of 
"oh, well, we did allocate all those 2M chunks, not knowing whether they 
were going to be used or not".

Sure, it's going to help programs that actually use all of it. Nobody is 
surprised. What I still care about, and what makes _all_ the timings I've 
seen in this whole insane thread pretty much totally useless, is the fact 
that we used to know that what _really_ speeds up a machine is caching. 
Keeping _relevant_ data around so that you don't do IO. And the mantra 
from pretty much day one has been "free memory is wasted memory".

Yet now, the possibility of _truly_ wasting memory isn't apparently even a 
blip on anybody's radar. People blithely talk about changing glibc default 
behavior as if there are absolutely no issues, and 2MB chunks are pocket 
change.

I can pretty much guarantee that every single developer on this list has a 
machine with excessive amounts of memory compared to what the machine is 
actually required to do. And I just do not think that is true in general.

				Linus

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .
