On 02/24/2010 04:12 PM, Andrew Morton wrote:
> On Wed, 24 Feb 2010 15:57:31 -0500 Rik van Riel <riel@xxxxxxxxxx> wrote:
>> On 02/24/2010 03:52 PM, Andrew Morton wrote:
>>> On Wed, 24 Feb 2010 15:28:44 -0500 Rik van Riel <riel@xxxxxxxxxx> wrote:
>>> If this work could be done synchronously then runtimes become more
>>> consistent, which is a good thing.
>> Only if it means run times become shorter...
> That of course would be a problem to be traded off against the
> advantage.  One would need to quantify these things to make that call.
>
> I asked a question and all I'm getting in reply is flippancy and
> unsubstantiated assertions.  It may have been a bad question, but
> they're certainly bad answers :(
The hugepage patchset as it stands tries to allocate huge
pages synchronously, but will fall back to normal 4kB pages
if no huge pages are available.
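
To make that fallback concrete, here is a rough userspace sketch of
the policy (illustration only, not the patchset code; alloc_huge()
and alloc_base() are made-up stand-ins for the kernel's page
allocator, with alloc_huge() failing randomly to mimic a fragmented
free list):

/*
 * Illustration only -- not the actual patchset code. It mimics the
 * fallback policy described above: try a 2MB allocation at fault
 * time and, if none is available, satisfy the fault with a 4kB page.
 */
#include <stdio.h>
#include <stdlib.h>

#define HUGE_SIZE (2UL * 1024 * 1024)	/* 2MB huge page */
#define BASE_SIZE (4UL * 1024)		/* 4kB base page */

/* Stand-in for the huge page allocator; fails "randomly" here to
 * mimic fragmentation. */
static void *alloc_huge(void)
{
	return (rand() & 1) ? malloc(HUGE_SIZE) : NULL;
}

static void *alloc_base(void)
{
	return malloc(BASE_SIZE);
}

/* Fault path: prefer a huge page, but never block waiting for one. */
static void *handle_fault(size_t *size)
{
	void *page = alloc_huge();

	if (page) {
		*size = HUGE_SIZE;
		return page;		/* back the whole 2MB range at once */
	}
	*size = BASE_SIZE;
	return alloc_base();		/* fall back to a normal 4kB page */
}

int main(void)
{
	for (int i = 0; i < 4; i++) {
		size_t size;
		void *page = handle_fault(&size);

		printf("fault %d backed by a %zu byte page\n", i, size);
		free(page);
	}
	return 0;
}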
Similarly, khugepaged only compacts anonymous memory into
hugepages if/when hugepages become available.
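
Again purely as an illustration (a userspace sketch, not khugepaged
itself; struct region, try_alloc_huge() and try_collapse() are
invented for the example), the collapse step amounts to: once a 2MB
page can finally be allocated, copy the 512 existing 4kB pages into
it and use the huge page from then on:

/*
 * Userspace sketch of the collapse idea, not khugepaged itself.
 * The names and data layout are invented for the example.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BASE_SIZE 4096UL
#define HUGE_SIZE (512 * BASE_SIZE)	/* 2MB */

/* A fake "region" backed by 512 separately allocated 4kB pages. */
struct region {
	void *pages[512];
	void *huge;	/* non-NULL once the region has been collapsed */
};

/* Stand-in for the huge page allocator; may fail under fragmentation. */
static void *try_alloc_huge(void)
{
	return (rand() & 1) ? malloc(HUGE_SIZE) : NULL;
}

/* Collapse the region into one huge page if we can get one. */
static int try_collapse(struct region *r)
{
	void *huge = try_alloc_huge();

	if (!huge)
		return 0;			/* try again later */

	for (int i = 0; i < 512; i++) {
		memcpy((char *)huge + i * BASE_SIZE, r->pages[i], BASE_SIZE);
		free(r->pages[i]);		/* small pages no longer needed */
		r->pages[i] = NULL;
	}
	r->huge = huge;
	return 1;
}

int main(void)
{
	struct region r = { .huge = NULL };

	for (int i = 0; i < 512; i++) {
		r.pages[i] = malloc(BASE_SIZE);
		memset(r.pages[i], i & 0xff, BASE_SIZE);
	}

	/* Keep retrying, the way a periodic background scan would. */
	while (!try_collapse(&r))
		;

	printf("region collapsed into one %lu-byte page\n", HUGE_SIZE);
	free(r.huge);
	return 0;
}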
Trying to always allocate hugepages synchronously would
mean potentially having to defragment memory synchronously,
before we can allocate memory for a page fault.
While I have no numbers, I have the strong suspicion that
the performance impact of potentially defragmenting 2MB
of memory before each page fault could lead to more
performance inconsistency than allocating small pages at
first and having them collapsed into large pages later...
The amount of work involved in making a 2MB page available
could be fairly big, which is why I suspect we will be
better off doing it asynchronously - preferably on an
otherwise idle CPU core.
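
As a rough userspace analogy (Linux-specific sketch; do_defrag_pass()
is just a placeholder for the real compaction/collapse work), the
idea is a worker thread running at SCHED_IDLE priority, so it only
gets CPU time the rest of the system is not using:

/*
 * Sketch of "do the heavy work in the background", not kernel code.
 * do_defrag_pass() is a hypothetical placeholder for one pass of
 * defragmentation / hugepage collapse.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Placeholder for one pass of defragmentation / collapse work. */
static void do_defrag_pass(void)
{
	/* real work would migrate and collapse pages here */
}

static void *background_worker(void *arg)
{
	struct sched_param sp = { .sched_priority = 0 };

	/* Only run when a CPU would otherwise be idle (Linux-specific). */
	pthread_setschedparam(pthread_self(), SCHED_IDLE, &sp);

	for (int i = 0; i < 5; i++) {	/* periodic scan, khugepaged-style */
		do_defrag_pass();
		sleep(1);
	}
	return NULL;
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, background_worker, NULL);

	/* Foreground work proceeds without waiting on defragmentation. */
	puts("foreground continues while background defrag runs");

	pthread_join(worker, NULL);
	return 0;
}

In the patchset, khugepaged plays that role: it wakes up periodically
and does the collapsing in the background, instead of making the
faulting task wait.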