Re: [patch 36/36] khugepaged

On Wed, 24 Feb 2010 16:24:16 -0500 Rik van Riel <riel@xxxxxxxxxx> wrote:

> The hugepage patchset as it stands tries to allocate huge
> pages synchronously, but will fall back to normal 4kB pages
> if huge pages are not available.
> 
> Similarly, khugepaged only compacts anonymous memory into
> hugepages if/when hugepages become available.
> 
> Trying to always allocate hugepages synchronously would
> mean potentially having to defragment memory synchronously,
> before we can allocate memory for a page fault.
> 
> While I have no numbers, I have the strong suspicion that
> the performance impact of potentially defragmenting 2MB
> of memory before each page fault could lead to more
> performance inconsistency than allocating small pages at
> first and having them collapsed into large pages later...
> 
> The amount of work involved in making a 2MB page available
> could be fairly big, which is why I suspect we will be
> better off doing it asynchronously - preferably on an
> otherwise idle CPU core.

Sounds right.  How much CPU consumption are we seeing from khugepaged?

The above-quoted text would make a good addition to the (skimpy)
changelog!

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .
