On Sun 22-02-15 15:42:16, Shachar Raindel wrote:
> When do_wp_page is ending, in several cases it needs to unlock the
> pages and ptls it was accessing.
>
> Currently, this logic was "called" by using a goto jump. This makes
> following the control flow of the function harder. Readability was
> further hampered by the unlock case containing large amount of logic
> needed only in one of the 3 cases.
>
> Using goto for cleanup is generally allowed. However, moving the
> trivial unlocking flows to the relevant call sites allow deeper
> refactoring in the next patch.
>
> Signed-off-by: Shachar Raindel <raindel@xxxxxxxxxxxx>
> Acked-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Acked-by: Rik van Riel <riel@xxxxxxxxxx>
> Acked-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> Acked-by: Haggai Eran <haggaie@xxxxxxxxxxxx>
> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: Matthew Wilcox <matthew.r.wilcox@xxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Cc: Peter Feiner <pfeiner@xxxxxxxxxx>
> Cc: Michel Lespinasse <walken@xxxxxxxxxx>

Neat!

Reviewed-by: Michal Hocko <mhocko@xxxxxxx>

> ---
>  mm/memory.c | 21 ++++++++++++---------
>  1 file changed, 12 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 7a04414..3afd9ce 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2066,7 +2066,7 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  {
>  	struct page *old_page, *new_page = NULL;
>  	pte_t entry;
> -	int ret = 0;
> +	int page_copied = 0;
>  	unsigned long mmun_start = 0;	/* For mmu_notifiers */
>  	unsigned long mmun_end = 0;	/* For mmu_notifiers */
>  	struct mem_cgroup *memcg;
> @@ -2101,7 +2101,9 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  					 &ptl);
>  		if (!pte_same(*page_table, orig_pte)) {
>  			unlock_page(old_page);
> -			goto unlock;
> +			pte_unmap_unlock(page_table, ptl);
> +			page_cache_release(old_page);
> +			return 0;
>  		}
>  		page_cache_release(old_page);
>  	}
> @@ -2148,7 +2150,9 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  					 &ptl);
>  		if (!pte_same(*page_table, orig_pte)) {
>  			unlock_page(old_page);
> -			goto unlock;
> +			pte_unmap_unlock(page_table, ptl);
> +			page_cache_release(old_page);
> +			return 0;
>  		}
>  		page_mkwrite = 1;
>  	}
> @@ -2246,29 +2250,28 @@ gotten:
>  
>  		/* Free the old page.. */
>  		new_page = old_page;
> -		ret |= VM_FAULT_WRITE;
> +		page_copied = 1;
>  	} else
>  		mem_cgroup_cancel_charge(new_page, memcg);
>  
>  	if (new_page)
>  		page_cache_release(new_page);
> -unlock:
> +
>  	pte_unmap_unlock(page_table, ptl);
> -	if (mmun_end > mmun_start)
> -		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> +	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
>  	if (old_page) {
>  		/*
>  		 * Don't let another task, with possibly unlocked vma,
>  		 * keep the mlocked page.
>  		 */
> -		if ((ret & VM_FAULT_WRITE) && (vma->vm_flags & VM_LOCKED)) {
> +		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
>  			lock_page(old_page);	/* LRU manipulation */
>  			munlock_vma_page(old_page);
>  			unlock_page(old_page);
>  		}
>  		page_cache_release(old_page);
>  	}
> -	return ret;
> +	return page_copied ? VM_FAULT_WRITE : 0;
>  oom_free_new:
>  	page_cache_release(new_page);
>  oom:
> -- 
> 1.7.11.2

-- 
Michal Hocko
SUSE Labs
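
[Editor's illustration, not part of the original thread or of mm/memory.c: a minimal, self-contained C sketch of the pattern the commit message above describes, contrasting a shared goto-based unlock label with unlocking inline at each early-return site. All names (handle_fault_goto, handle_fault_inline, page_lock, table_lock, pte_changed) are hypothetical stand-ins, and ordinary pthread mutexes stand in for the page lock and pte lock.]

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical locks; none of these names come from mm/memory.c. */
    static pthread_mutex_t page_lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Before: every early exit funnels through one shared cleanup label. */
    static int handle_fault_goto(int pte_changed)
    {
    	int ret = 0;

    	pthread_mutex_lock(&page_lock);
    	pthread_mutex_lock(&table_lock);

    	if (pte_changed)
    		goto unlock;	/* reader must chase the label to see what happens */

    	/* ... the interesting work would happen here ... */
    	ret = 1;
    unlock:
    	pthread_mutex_unlock(&table_lock);
    	pthread_mutex_unlock(&page_lock);
    	return ret;
    }

    /* After: the trivial exit unlocks inline and returns at the call site. */
    static int handle_fault_inline(int pte_changed)
    {
    	pthread_mutex_lock(&page_lock);
    	pthread_mutex_lock(&table_lock);

    	if (pte_changed) {
    		pthread_mutex_unlock(&table_lock);
    		pthread_mutex_unlock(&page_lock);
    		return 0;	/* control flow is visible right here */
    	}

    	/* ... the interesting work would happen here ... */
    	pthread_mutex_unlock(&table_lock);
    	pthread_mutex_unlock(&page_lock);
    	return 1;
    }

    int main(void)
    {
    	printf("goto variant:   %d %d\n", handle_fault_goto(1), handle_fault_goto(0));
    	printf("inline variant: %d %d\n", handle_fault_inline(1), handle_fault_inline(0));
    	return 0;
    }

Both variants release the locks on every path; the second simply makes the trivial "pte changed under us" exit readable in place, which is what lets the follow-up patch split the remaining body into separate functions.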