On Wed, Jan 03, 2024 at 12:26:29PM -0800, Dave Hansen wrote:
> On 1/3/24 10:05, Catalin Marinas wrote:
> >> --- a/mm/mmu_gather.c
> >> +++ b/mm/mmu_gather.c
> >> @@ -384,7 +384,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb)
> >>  		 * On x86 non-fullmm doesn't yield significant difference
> >>  		 * against fullmm.
> >>  		 */
> >> -		tlb->fullmm = 1;
> >> +		tlb->need_flush_all = 1;
> >>  		__tlb_reset_range(tlb);
> >>  		tlb->freed_tables = 1;
> >>  	}
> >
> > The optimisation here was added about a year later in commit
> > 7a30df49f63a ("mm: mmu_gather: remove __tlb_reset_range() for force
> > flush"). Do we still need to keep freed_tables = 1 here? I'd say only
> > __tlb_reset_range().
>
> I think the __tlb_reset_range() can be dangerous if it clears
> ->freed_tables. On x86 at least, it might lead to skipping the TLB IPI
> for CPUs that are in lazy TLB mode. When those wake back up they might
> start using the freed page tables.

You are right; I did not realise freed_tables is reset in
__tlb_reset_range().
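
For reference, __tlb_reset_range() currently looks roughly like this
(sketching from memory, so the exact field list may differ between
kernel versions):

	static void __tlb_reset_range(struct mmu_gather *tlb)
	{
		if (tlb->fullmm) {
			tlb->start = tlb->end = ~0;
		} else {
			tlb->start = TASK_SIZE;
			tlb->end = 0;
		}

		/*
		 * All the page-table freeing state is dropped here,
		 * including freed_tables, which is what makes the x86
		 * flush IPI reach CPUs in lazy TLB mode.
		 */
		tlb->freed_tables = 0;
		tlb->cleared_ptes = 0;
		tlb->cleared_pmds = 0;
		tlb->cleared_puds = 0;
		tlb->cleared_p4ds = 0;
	}

So dropping the freed_tables = 1 after the reset, as I suggested, would
lose exactly the state your lazy TLB scenario relies on.

--
Catalin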