On Wed, Mar 09, 2016 at 11:55:43PM +0200, Ebru Akagunduz wrote:
> Currently khugepaged does swapin readahead to improve the THP
> collapse rate. This patch checks vm statistics to avoid the swapin
> workload when it is unnecessary, so that khugepaged won't consume
> resources on swapin while the system is under memory pressure.
>
> The patch was tested with a test program that allocates 800MB of
> memory, writes to it, and then sleeps. The system was forced to swap
> it all out. Afterwards, the test program touches the area by writing
> to it, skipping one page in every 20 pages of the area. While waiting
> for swapin readahead of the remaining part of the area, the memory
> was forced to be busy doing page reclaim. Although there was enough
> free memory during the test, khugepaged did not do swapin readahead
> because memory was busy.
>
> Test results:
>
> After swapped out
> -------------------------------------------------------------------
>               | Anonymous | AnonHugePages | Swap      | Fraction |
> -------------------------------------------------------------------
> With patch    | 450964 kB | 450560 kB     | 349036 kB | 99%      |
> -------------------------------------------------------------------
> Without patch | 351308 kB | 350208 kB     | 448692 kB | 99%      |
> -------------------------------------------------------------------
>
> After swapped in (waiting 10 minutes)
> -------------------------------------------------------------------
>               | Anonymous | AnonHugePages | Swap      | Fraction |
> -------------------------------------------------------------------
> With patch    | 637932 kB | 559104 kB     | 162068 kB | 69%      |
> -------------------------------------------------------------------
> Without patch | 586816 kB | 464896 kB     | 213184 kB | 79%      |
> -------------------------------------------------------------------
>
> Signed-off-by: Ebru Akagunduz <ebru.akagunduz@xxxxxxxxx>
> ---
>  mm/huge_memory.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 7f75292..109a2af 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -102,6 +102,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
>   */
>  static unsigned int khugepaged_max_ptes_none __read_mostly;
>  static unsigned int khugepaged_max_ptes_swap __read_mostly;
> +static unsigned long int allocstall = 0;
>
>  static int khugepaged(void *none);
>  static int khugepaged_slab_init(void);
> @@ -2411,6 +2412,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>  	struct mem_cgroup *memcg;
>  	unsigned long mmun_start;	/* For mmu_notifiers */
>  	unsigned long mmun_end;		/* For mmu_notifiers */
> +	unsigned long events[NR_VM_EVENT_ITEMS], swap = 0;

collapse_huge_page() is nested under the khugepaged scan path where you
take the other snapshot of these counters, so you effectively allocate
2 * NR_VM_EVENT_ITEMS * sizeof(long) on the stack. That's a lot of
stack, and all it is used for is getting the total value of the
ALLOCSTALL event.

Should we instead introduce a helper that sums the values of a
particular event over all CPUs? I'm surprised we don't have one yet.

Something like this (totally untested):

unsigned long sum_vm_event(enum vm_event_item item)
{
	int cpu;
	unsigned long ret = 0;

	get_online_cpus();
	for_each_online_cpu(cpu)
		ret += per_cpu(vm_event_states, cpu).event[item];
	put_online_cpus();

	return ret;
}

--
 Kirill A. Shutemov
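
For context, a rough sketch of how the caller side could look with such
a helper; this is untested and not part of the thread. It assumes (not
shown in the quoted hunks) that allocstall caches the ALLOCSTALL count
seen on the previous khugepaged pass, and the actual swapin-readahead
call is left as a placeholder comment:

	/* In the khugepaged scan loop (sketch): remember the ALLOCSTALL
	 * count observed on this pass for comparison on the next one. */
	allocstall = sum_vm_event(ALLOCSTALL);

	/* In collapse_huge_page() (sketch): no events[NR_VM_EVENT_ITEMS]
	 * array on the stack; only the single summed counter is read. */
	if (allocstall == sum_vm_event(ALLOCSTALL)) {
		/*
		 * No direct-reclaim stalls since the last pass, so the
		 * system is not under memory pressure and swapping the
		 * missing pages back in should be acceptable.
		 */
		/* ... do swapin readahead here ... */
	}

With that, the stack cost no longer scales with NR_VM_EVENT_ITEMS,
which addresses the concern raised above.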