Hi Dave,

John David Anglin <dave.anglin@xxxxxxxx> writes:
> When a page is not present, we get non-access data TLB faults from
> the fdc and fic instructions in flush_user_dcache_range_asm and
> flush_user_icache_range_asm. When these occur, the cache line is
> not invalidated and potentially we get memory corruption. The
> problem was hidden by the nullification of the flush instructions.
>
> These faults also affect performance. With pa8800/pa8900 processors,
> there will be 32 faults per 4 KB page since the cache line is 128
> bytes. There will be more faults with earlier processors.
>
> The problem is fixed by using flush_cache_pages(). It does the flush
> using a tmp alias mapping.
>
> The flush_cache_pages() call in flush_cache_range() flushed too
> large a range.
>
> Signed-off-by: John David Anglin <dave.anglin@xxxxxxxx>
> ---
>
> diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
> index 94150b91c96f..e439b53b0f62 100644
> --- a/arch/parisc/kernel/cache.c
> +++ b/arch/parisc/kernel/cache.c
> @@ -558,15 +558,6 @@ static void flush_cache_pages(struct vm_area_struct *vma, struct mm_struct *mm,
>  	}
>  }
>  
> -static void flush_user_cache_tlb(struct vm_area_struct *vma,
> -	unsigned long start, unsigned long end)
> -{
> -	flush_user_dcache_range_asm(start, end);
> -	if (vma->vm_flags & VM_EXEC)
> -		flush_user_icache_range_asm(start, end);
> -	flush_tlb_range(vma, start, end);
> -}
> -
>  void flush_cache_mm(struct mm_struct *mm)
>  {
>  	struct vm_area_struct *vma;
> @@ -582,13 +573,6 @@ void flush_cache_mm(struct mm_struct *mm)
>  	}
>  
>  	preempt_disable();
> -	if (mm->context == mfsp(3)) {
> -		for (vma = mm->mmap; vma; vma = vma->vm_next)
> -			flush_user_cache_tlb(vma, vma->vm_start, vma->vm_end);
> -		preempt_enable();
> -		return;
> -	}
> -
>  	for (vma = mm->mmap; vma; vma = vma->vm_next)
>  		flush_cache_pages(vma, mm, vma->vm_start, vma->vm_end);
>  	preempt_enable();
> @@ -606,13 +590,7 @@ void flush_cache_range(struct vm_area_struct *vma,
>  	}
>  
>  	preempt_disable();
> -	if (vma->vm_mm->context == mfsp(3)) {
> -		flush_user_cache_tlb(vma, start, end);
> -		preempt_enable();
> -		return;
> -	}
> -
> -	flush_cache_pages(vma, vma->vm_mm, vma->vm_start, vma->vm_end);
> +	flush_cache_pages(vma, vma->vm_mm, start, end);
>  	preempt_enable();
>  }

I think the preempt_disable()/preempt_enable() calls are no longer required. I added them to fix a bug when the kernel is preempted after the mm->context / mfsp(3) compare, but since that compare is now removed, they shouldn't be needed anymore.
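For reference, the follow-up cleanup I have in mind would look roughly like this for flush_cache_range() (an untested sketch on top of your patch; hunk offsets omitted, and the same removal would apply to the preempt_disable()/preempt_enable() pair in flush_cache_mm()):

	--- a/arch/parisc/kernel/cache.c
	+++ b/arch/parisc/kernel/cache.c
	@@ ... @@ void flush_cache_range(struct vm_area_struct *vma,
	 	}
	 
	-	preempt_disable();
	 	flush_cache_pages(vma, vma->vm_mm, start, end);
	-	preempt_enable();
	 }

That could go in either as a v2 of this patch or as a separate cleanup on top.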