On Sat, 2018-09-29 at 11:37 +1000, Nicholas Piggin wrote:
> Hi,
>
> Did you get a chance to look at these?
>
> This first patch 1/11 solves the lockup problem that Guenter reported
> with my changes to core mm code. So I plan to resubmit my patches
> to Andrew's -mm tree with this patch to avoid nios2 breakage.
>
> Thanks,
> Nick

Do you have a git repo that contains these patches? If not, can you
send them as attachments to my email?

Regards
Ley Foon

>
> On Mon, 24 Sep 2018 01:08:20 +1000
> Nicholas Piggin <npiggin@xxxxxxxxx> wrote:
>
> >
> > Fault paths like do_read_fault will install a Linux pte with the
> > young bit clear. The CPU will fault again because the TLB has not
> > been updated; this time a valid pte exists, so handle_pte_fault
> > will just set the young bit with ptep_set_access_flags, which
> > flushes the TLB.
> >
> > Since the TLB has been flushed, the next attempt will go to the
> > fast TLB handler, which loads the TLB with the new Linux pte. The
> > access then proceeds.
> >
> > This design is fragile in its dependence on the young bit being
> > clear after the initial Linux fault. A proposed core mm change to
> > immediately set the young bit upon such a fault results in
> > ptep_set_access_flags not flushing the TLB, because it finds no
> > change to the pte. The spurious fault fix path only flushes the
> > TLB if the access was a store. If it was a load, then this results
> > in an infinite loop of page faults.
> >
> > This change adds a TLB flush in update_mmu_cache, which removes
> > that TLB entry upon the first fault. This will cause the fast TLB
> > handler to load the new pte and avoid the Linux page fault
> > entirely.
> >
> > Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
> > ---
> >  arch/nios2/mm/cacheflush.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
> > index 506f6e1c86d5..d58e7e80dc0d 100644
> > --- a/arch/nios2/mm/cacheflush.c
> > +++ b/arch/nios2/mm/cacheflush.c
> > @@ -204,6 +204,8 @@ void update_mmu_cache(struct vm_area_struct *vma,
> >  	struct page *page;
> >  	struct address_space *mapping;
> >
> > +	flush_tlb_page(vma, address);
> > +
> >  	if (!pfn_valid(pfn))
> >  		return;
> >
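
For context, the core mm path the commit message describes is the tail of
handle_pte_fault() in mm/memory.c. A simplified sketch of that code
(approximately the 4.19-era logic, trimmed for illustration and not a
verbatim quote) is below; with the proposed core mm change the pte is
already young on entry, so ptep_set_access_flags() sees no change, the
else branch is taken, and a load fault never gets its stale TLB entry
flushed without the flush_tlb_page() added above:

	entry = pte_mkyoung(entry);
	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				  vmf->flags & FAULT_FLAG_WRITE)) {
		/*
		 * The pte actually changed: the arch implementation of
		 * ptep_set_access_flags() flushes the TLB, and
		 * update_mmu_cache() lets the arch preload the new pte.
		 */
		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
	} else {
		/*
		 * The "spurious fault fix path": only write faults get a
		 * TLB flush here, so with a pre-set young bit a stale TLB
		 * entry for a read access is never evicted and the CPU
		 * keeps faulting on the same address.
		 */
		if (vmf->flags & FAULT_FLAG_WRITE)
			flush_tlb_fix_spurious_fault(vmf->vma, vmf->address);
	}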