On 04/11/2010 06:22 PM, Linus Torvalds wrote:
> On Sun, 11 Apr 2010, Ingo Molnar wrote:
>> Both Xorg, xterms and firefox have rather huge RSS's on my boxes. (Even a
>> phone these days easily has more than 512 MB RAM.) Andrea measured
>> multi-percent improvement in gcc performance. I think it's real.
> Reality check: he got multiple percent with
>  - one huge badly written file being compiled that took 22s because it's
>    such a horrible monster.
Not everything is a kernel build. Template-heavy C++ code will also
allocate tons of memory. gcc -flto will also want lots of memory.
>  - magic libc malloc flags that are totally and utterly unrealistic in
>    anything but a benchmark
Having glibc allocate in chunks of 2MB instead of 1MB is not
unrealistic. I agree about MMAP_THRESHOLD.
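For concreteness, a minimal sketch of the kind of tuning being discussed,
using glibc's stock mallopt() knobs (the 2MB values here are illustrative,
not a recommendation):

    /* Ask glibc to grow the heap in 2MB steps and to keep large
     * allocations on the heap rather than in separate mmap()s, so the
     * anonymous memory stays contiguous enough to be backed by huge
     * pages. */
    #include <malloc.h>
    #include <stdlib.h>

    int main(void)
    {
            /* pad sbrk() extensions so the heap grows in 2MB increments */
            mallopt(M_TOP_PAD, 2 * 1024 * 1024);

            /* raise the size above which malloc() falls back to mmap(),
             * keeping more big allocations in the contiguous heap */
            mallopt(M_MMAP_THRESHOLD, 2 * 1024 * 1024);

            void *p = malloc(1024 * 1024);  /* served from the heap */
            free(p);
            return 0;
    }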
>  - by basically keeping one CPU totally busy doing defragmentation.
I never saw khugepaged take any significant amount of cpu.
> Quite frankly, that kind of "performance analysis" makes me _less_
> interested rather than more. Because all it shows is that you're willing
> to do anything at all to get better numbers, regardless of whether it is
> _realistic_ or not.
>
> Seriously, guys. Get a grip. If you start talking about special malloc
> algorithms, you have ALREADY LOST. Google for memory fragmentation with
> various malloc implementations in multi-threaded applications. Thinking
> that you can just allocate in 2MB chunks is so _fundamentally_ broken that
> this whole thread should have been laughed out of the room.
And yet Oracle and java have options to use large pages, and we know
google and HPC like 'em. Maybe they just haven't noticed the
fundamental brokenness yet.
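For example, the JVM exposes -XX:+UseLargePages, and on Linux a program can
ask for explicit 2MB pages itself. A rough sketch, assuming the admin has
reserved huge pages beforehand (e.g. via /proc/sys/vm/nr_hugepages):

    /* Explicit 2MB-backed anonymous mapping via MAP_HUGETLB
     * (kernel >= 2.6.32), independent of transparent hugepages.
     * If no huge pages are reserved, the mmap() simply fails. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    #define LEN (2 * 1024 * 1024)   /* one 2MB huge page */

    int main(void)
    {
            void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap(MAP_HUGETLB)");
                    return 1;
            }
            munmap(p, LEN);
            return 0;
    }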
--
error compiling committee.c: too many arguments to function