On Tue, Apr 12, 2011 at 8:48 AM, Robert Święcki <robert@xxxxxxxxxxx> wrote:
>>
>> Hmm. Sounds like an endless loop in kernel mode.
>>
>> Use "perf record -ag" as root, it should show up very clearly in the report.
>
> I've put some data here -
> http://groups.google.com/group/fa.linux.kernel/browse_thread/thread/4345dcc4f7750ce2
> - I think it's somewhat connected (sys_mlock appears on both cases).

Ok, so it's definitely sys_mlock. And I suspect it's due to commit
53a7706d5ed8 somehow looping forever.

One possible cause would be how that commit made things care deeply
about the return value of __get_user_pages(), and in particular what
happens when that return value is zero. It ends up looping forever in
do_mlock_pages() for that case, because it does

	nend = nstart + ret * PAGE_SIZE;

so now the next round we'll set "nstart = nend" and start all over.

I see at least one way __get_user_pages() will return zero, and it's
if it is passed an npages of 0 to begin with. Which can easily happen
if you try to mlock() the first page of a stack segment: the code will
jump over that stack segment page, and then have nothing to do, and
return zero. So then do_mlock_pages() will just keep on trying again.

THIS IS A HACKY AND UNTESTED PATCH! It's ugly as hell, because the
real problem is do_mlock_pages() caring too damn much about the return
value, and us hiding the whole stack page thing in that function. I
wouldn't want to commit it as-is, but if you can easily reproduce the
problem, it's a good patch to test out the theory. Assuming I didn't
screw something up.

Again, TOTALLY UNTESTED!

		Linus
 mm/mlock.c | 10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 2689a08c79af..080c219973ea 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -162,6 +162,7 @@ static long __mlock_vma_pages_range(struct vm_area_struct *vma,
 	unsigned long addr = start;
 	int nr_pages = (end - start) / PAGE_SIZE;
 	int gup_flags;
+	long retval, offset;
 
 	VM_BUG_ON(start & ~PAGE_MASK);
 	VM_BUG_ON(end & ~PAGE_MASK);
@@ -189,13 +190,20 @@ static long __mlock_vma_pages_range(struct vm_area_struct *vma,
 		gup_flags |= FOLL_MLOCK;
 
 	/* We don't try to access the guard page of a stack vma */
+	offset = 0;
 	if (stack_guard_page(vma, start)) {
 		addr += PAGE_SIZE;
 		nr_pages--;
+		offset = 1;
 	}
 
-	return __get_user_pages(current, mm, addr, nr_pages, gup_flags,
+	retval = __get_user_pages(current, mm, addr, nr_pages, gup_flags,
 				NULL, NULL, nonblocking);
+
+	/* Get the return value correct even in the face of the guard page */
+	if (retval < 0)
+		return offset ? : retval;
+	return retval + offset;
 }
 
 /*