On Mon, Sep 16, 2019 at 09:35:21AM +0000, Justin He (Arm Technology China) wrote:
> Hi Kirill
>
> > -----Original Message-----
> > From: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
> > Sent: 16 September 2019 17:16
> > To: Justin He (Arm Technology China) <Justin.He@xxxxxxx>
> > Cc: Catalin Marinas <Catalin.Marinas@xxxxxxx>; Will Deacon <will@xxxxxxxxxx>;
> > Mark Rutland <Mark.Rutland@xxxxxxx>; James Morse <James.Morse@xxxxxxx>;
> > Marc Zyngier <maz@xxxxxxxxxx>; Matthew Wilcox <willy@xxxxxxxxxxxxx>;
> > Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>;
> > linux-arm-kernel@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > linux-mm@xxxxxxxxx; Punit Agrawal <punitagrawal@xxxxxxxxx>;
> > Anshuman Khandual <Anshuman.Khandual@xxxxxxx>; Jun Yao <yaojun8558363@xxxxxxxxx>;
> > Alex Van Brunt <avanbrunt@xxxxxxxxxx>; Robin Murphy <Robin.Murphy@xxxxxxx>;
> > Thomas Gleixner <tglx@xxxxxxxxxxxxx>; Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>;
> > Jérôme Glisse <jglisse@xxxxxxxxxx>; Ralph Campbell <rcampbell@xxxxxxxxxx>;
> > hejianet@xxxxxxxxx
> > Subject: Re: [PATCH v3 2/2] mm: fix double page fault on arm64 if PTE_AF
> > is cleared
> >
> > On Sat, Sep 14, 2019 at 12:32:39AM +0800, Jia He wrote:
> > > When we tested pmdk unit test [1] vmmalloc_fork TEST1 in an arm64
> > > guest, there will be a double page fault in __copy_from_user_inatomic
> > > of cow_user_page.
> > >
> > > The call trace below is from arm64 do_page_fault, for debugging purposes:
> > > [ 110.016195] Call trace:
> > > [ 110.016826]  do_page_fault+0x5a4/0x690
> > > [ 110.017812]  do_mem_abort+0x50/0xb0
> > > [ 110.018726]  el1_da+0x20/0xc4
> > > [ 110.019492]  __arch_copy_from_user+0x180/0x280
> > > [ 110.020646]  do_wp_page+0xb0/0x860
> > > [ 110.021517]  __handle_mm_fault+0x994/0x1338
> > > [ 110.022606]  handle_mm_fault+0xe8/0x180
> > > [ 110.023584]  do_page_fault+0x240/0x690
> > > [ 110.024535]  do_mem_abort+0x50/0xb0
> > > [ 110.025423]  el0_da+0x20/0x24
> > >
> > > The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
> > > [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003,
> > > pmd=000000023d4b3003, pte=360000298607bd3
> > >
> > > As told by Catalin: "On arm64 without hardware Access Flag, copying
> > > from user will fail because the pte is old and cannot be marked young.
> > > So we always end up with a zeroed page after fork() + CoW for pfn
> > > mappings. We don't always have a hardware-managed access flag on
> > > arm64."
> > >
> > > This patch fixes it by calling pte_mkyoung.
> > > Also, the parameter list is
> > > changed because vmf should be passed to cow_user_page().
> > >
> > > [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
> > >
> > > Reported-by: Yibo Cai <Yibo.Cai@xxxxxxx>
> > > Signed-off-by: Jia He <justin.he@xxxxxxx>
> > > ---
> > >  mm/memory.c | 30 +++++++++++++++++++++++++-----
> > >  1 file changed, 25 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index e2bb51b6242e..a64af6495f71 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
> > >  					2;
> > >  #endif
> > >
> > > +#ifndef arch_faults_on_old_pte
> > > +static inline bool arch_faults_on_old_pte(void)
> > > +{
> > > +	return false;
> > > +}
> > > +#endif
> > > +
> > >  static int __init disable_randmaps(char *s)
> > >  {
> > >  	randomize_va_space = 0;
> > > @@ -2140,7 +2147,8 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> > >  	return same;
> > >  }
> > >
> > > -static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> > > +static inline void cow_user_page(struct page *dst, struct page *src,
> > > +				 struct vm_fault *vmf)
> > >  {
> > >  	debug_dma_assert_idle(src);
> > >
> > > @@ -2152,20 +2160,32 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > >  	 */
> > >  	if (unlikely(!src)) {
> > >  		void *kaddr = kmap_atomic(dst);
> > > -		void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > +		void __user *uaddr = (void __user *)(vmf->address & PAGE_MASK);
> > > +		pte_t entry;
> > >
> > >  		/*
> > >  		 * This really shouldn't fail, because the page is there
> > >  		 * in the page tables. But it might just be unreadable,
> > >  		 * in which case we just give up and fill the result with
> > > -		 * zeroes.
> > > +		 * zeroes. If PTE_AF is cleared on arm64, it might
> > > +		 * cause double page fault. So makes pte young here
> > >  		 */
> > > +		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > +			spin_lock(vmf->ptl);
> > > +			entry = pte_mkyoung(vmf->orig_pte);
> >
> > Shouldn't you re-validate orig_pte after re-taking ptl? It can be
> > stale by now.
>
> Thanks, do you mean flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte))
> before pte_mkyoung?

No. You need to check pte_same(*vmf->pte, vmf->orig_pte) before modifying
anything and bail out if *vmf->pte has changed under you.

-- 
 Kirill A. Shutemov
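
For reference, a minimal sketch of the revalidation Kirill is asking for
might look like the helper below. This is an illustration written against
the v3 hunk quoted above, not the actual follow-up patch: the helper name
cow_mkyoung_revalidate() and the bool return convention (caller gives up
on false and lets the fault be retried) are assumptions for this sketch.

/*
 * Sketch only: re-check vmf->orig_pte under the PTE lock before marking
 * the PTE young, and report failure if it changed under us.
 */
static bool cow_mkyoung_revalidate(struct vm_fault *vmf)
{
	pte_t entry;

	spin_lock(vmf->ptl);
	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
		/*
		 * The PTE changed since orig_pte was read, e.g. a
		 * concurrent fault already handled it. Don't touch it;
		 * tell the caller to bail out and retry the fault.
		 */
		spin_unlock(vmf->ptl);
		return false;
	}

	entry = pte_mkyoung(vmf->orig_pte);
	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry, 0))
		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
	spin_unlock(vmf->ptl);

	return true;
}

Using ptep_set_access_flags()/update_mmu_cache() rather than a raw
set_pte_at() is this sketch's choice, not something stated in the thread;
it is the existing helper pair architectures provide for upgrading access
flags on a live PTE.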