On Fri, Nov 20, 2020 at 02:35:57PM +0000, Will Deacon wrote:
> clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
> updating the page-tables for the current mm. However, since the mm is not
> being freed, this can result in stale TLB entries on architectures which
> elide 'fullmm' invalidation.
>
> Ensure that TLB invalidation is performed after updating soft-dirty
> entries via clear_refs_write() by using the non-fullmm API to MMU gather.
>
> Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> ---
>  fs/proc/task_mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index a76d339b5754..316af047f1aa 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  			count = -EINTR;
>  			goto out_mm;
>  		}
> -		tlb_gather_mmu_fullmm(&tlb, mm);
> +		tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);

Let's assume my reply to patch 4 is wrong, and therefore we still need
tlb_gather/finish_mmu() here. But then wouldn't this change deprive
architectures other than ARM of the opportunity to optimize based on the
fact that it's a full-mm flush? It seems to me that ARM's interpretation
of tlb->fullmm is the special case, not the other way around.
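
To make the asymmetry concrete, here is a minimal userspace C model of
the two interpretations. This is a paraphrase, not the actual kernel
code: the struct is a stand-in for the real mmu_gather in
include/asm-generic/tlb.h, and the flush bodies only mimic the shape of
the generic tlb_flush() versus the arm64 one.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct mmu_gather; the real one carries
 * much more state. */
struct mmu_gather {
	bool fullmm;       /* set for whole-address-space teardown */
	bool freed_tables; /* page-table pages were freed */
};

/* Generic interpretation: fullmm is an *optimization hint* -- issue
 * one flush of the whole mm instead of many ranged invalidations. */
static void generic_tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm)
		printf("flush whole mm: one full invalidation\n");
	else
		printf("flush range: ranged invalidation\n");
}

/* arm64-style interpretation (paraphrased): fullmm means the mm is
 * going away, so stale TLB entries are harmless -- the ASID won't be
 * reused without a full invalidation anyway. Only the walk cache
 * needs flushing, and only if page tables were freed. */
static void arm64_tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm) {
		if (tlb->freed_tables)
			printf("flush walk cache only\n");
		else
			printf("skip invalidation entirely\n");
		return;
	}
	printf("flush range: ranged invalidation\n");
}

int main(void)
{
	/* The clear_refs_write() case: fullmm gather, but the mm stays
	 * live and no tables are freed. */
	struct mmu_gather tlb = { .fullmm = true, .freed_tables = false };

	generic_tlb_flush(&tlb); /* safe: everything is flushed */
	arm64_tlb_flush(&tlb);   /* unsafe here: nothing is flushed */
	return 0;
}

In this model, switching clear_refs_write() to the non-fullmm API fixes
the arm64 case but forces every other architecture down the ranged
path, which is the cost the question above is getting at.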