On Sun, May 09, 2010 at 12:56:49PM -0700, Linus Torvalds wrote:
> 
> 
> On Sun, 9 May 2010, Mel Gorman wrote:
> > 
> > It turns out not to be easy to do the preallocating of PUDs, PMDs and
> > PTEs that move_page_tables() needs. To avoid overallocating, it has to
> > follow the same logic as move_page_tables, duplicating some code in the
> > process. The ugliest aspect of all is passing those pre-allocated pages
> > back into move_page_tables, where they need to be passed down to such
> > functions as __pte_alloc. It turns out extremely messy.
> 
> Umm. What?
> 
> That's crazy talk. I'm not talking about preallocating stuff in order to
> pass it in to move_page_tables(). I'm talking about just _creating_ the
> dang page tables early - preallocating them IN THE PROCESS VM SPACE.
> 

Ok, I took the totally wrong approach by pre-allocating the pages and
passing them in to move_page_tables. Pages from the list were then taken
in preference to calling the page allocator. At the time, I thought the
cleanup for ENOMEM would be easier, as well as avoiding complications
with an overlapping temporary stack and new stack location. It was a bad
choice, wrong and damn fugly.

> IOW, a patch like this (this is a pseudo-patch, totally untested, won't
> compile, yadda yadda - you need to actually make the people who call
> "move_page_tables()" call that prepare function first etc etc)
> 

Sounds reasonable as a general approach. I'll tinker with it for a bit
to see if exec can be made any faster when combined with your other
suggestion to avoid mprotect_fixup.

In the meantime though, can the patch that avoids migrating within the
temporary stack be picked up? It closes the worst of the migrate vs exec
race without impacting performance. At least, I'm no longer able to
cause an oops with it applied.

> Yeah, if we care about holes in the page tables, we can certainly copy
> more of the move_page_tables() logic, but it certainly doesn't matter for
> execve().
> This just makes sure that the destination page tables exist
> first.
> 
> 			Linus
> 
> ---
>  mm/mremap.c |   22 +++++++++++++++++++++-
>  1 files changed, 21 insertions(+), 1 deletions(-)
> 
> diff --git a/mm/mremap.c b/mm/mremap.c
> index cde56ee..c14505c 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -128,6 +128,26 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> 
>  #define LATENCY_LIMIT	(64 * PAGE_SIZE)
> 
> +/*
> + * Preallocate the page tables, so that we can do the actual move
> + * without any allocations, and thus no error handling etc.
> + */
> +int prepare_move_page_tables(struct vm_area_struct *vma,
> +		unsigned long old_addr, struct vm_area_struct *new_vma,
> +		unsigned long new_addr, unsigned long len)
> +{
> +	unsigned long end_addr = new_addr + len;
> +
> +	while (new_addr < end_addr) {
> +		pmd_t *new_pmd;
> +		new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> +		if (!new_pmd)
> +			return -ENOMEM;
> +		new_addr = (new_addr + PMD_SIZE) & PMD_MASK;
> +	}
> +	return 0;
> +}
> +
>  unsigned long move_page_tables(struct vm_area_struct *vma,
>  		unsigned long old_addr, struct vm_area_struct *new_vma,
>  		unsigned long new_addr, unsigned long len)
> @@ -147,7 +167,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>  		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
>  		if (!old_pmd)
>  			continue;
> -		new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> +		new_pmd = get_old_pmd(vma->vm_mm, new_addr);
>  		if (!new_pmd)
>  			break;
>  		next = (new_addr + PMD_SIZE) & PMD_MASK;

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab