Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Mon, 17 Jul 2017 11:02:46 -0700 Nadav Amit <namit@xxxxxxxxxx> wrote:
>
>> Setting and clearing mm->tlb_flush_pending can be performed by multiple
>> threads, since mmap_sem may only be acquired for read in task_numa_work.
>> If this happens, tlb_flush_pending may be cleared while one of the
>> threads still changes PTEs and batches TLB flushes.
>>
>> As a result, TLB flushes can be skipped because the indication of
>> pending TLB flushes is lost, for instance due to a race between
>> migration and change_protection_range (just as in the scenario that
>> caused the introduction of tlb_flush_pending).
>>
>> The feasibility of such a scenario was confirmed by adding an assertion
>> to check that tlb_flush_pending is not set by two threads, adding
>> artificial latency in change_protection_range(), and using sysctl to
>> reduce kernel.numa_balancing_scan_delay_ms.
>>
>> Fixes: 20841405940e ("mm: fix TLB flush race between migration, and
>> change_protection_range")
>
> The changelog doesn't describe the user-visible effects of the bug (it
> should always do so, please).  But it is presumably a data-corruption
> bug so I suggest that a -stable backport is warranted?

Yes, although I did not encounter an actual memory corruption.

> It has been there for 4 years so I'm thinking we can hold off a
> mainline (and hence -stable) merge until 4.13-rc1, yes?
>
> One thought:
>
>> --- a/include/linux/mm_types.h
>> +++ b/include/linux/mm_types.h
>>
>> ...
>>
>> @@ -528,11 +528,11 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
>>  static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
>>  {
>>  	barrier();
>> -	return mm->tlb_flush_pending;
>> +	return atomic_read(&mm->tlb_flush_pending) > 0;
>>  }
>>  static inline void set_tlb_flush_pending(struct mm_struct *mm)
>>  {
>> -	mm->tlb_flush_pending = true;
>> +	atomic_inc(&mm->tlb_flush_pending);
>>
>>  	/*
>>  	 * Guarantee that the tlb_flush_pending store does not leak into the
>> @@ -544,7 +544,7 @@ static inline void set_tlb_flush_pending(struct mm_struct *mm)
>>  static inline void clear_tlb_flush_pending(struct mm_struct *mm)
>>  {
>>  	barrier();
>> -	mm->tlb_flush_pending = false;
>> +	atomic_dec(&mm->tlb_flush_pending);
>>  }
>>  #else
>
> Do we still need the barrier()s or is it OK to let the atomic op do
> that for us (with a suitable code comment)?

I will submit v2. However, I really don't understand the comment on
mm_tlb_flush_pending():

	/*
	 * Memory barriers to keep this state in sync are graciously provided by
	 * the page table locks, outside of which no page table modifications
	 * happen. The barriers below prevent the compiler from re-ordering the
	 * instructions around the memory barriers that are already present in
	 * the code.
	 */

But IIUC, migrate_misplaced_transhuge_page() does not call
mm_tlb_flush_pending() while the ptl is taken.

Mel, can I bother you again? Should I move the flush in
migrate_misplaced_transhuge_page() until after the ptl is taken?

Thanks,
Nadav
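
P.S. For anyone following along, here is a minimal userspace sketch of
the counting scheme in the diff above. It is only an illustration, not
the kernel code: it uses C11 atomics in place of the kernel's atomic_t
API, and the function names are made up for the example. It shows why a
counter, unlike a boolean flag, keeps the "flush pending" indication
alive until the last batching thread is done:

	/*
	 * Illustrative sketch only -- C11 atomics standing in for the
	 * kernel's atomic_t API. Shows why a counter survives concurrent
	 * set/clear pairs where a plain boolean would not.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_int tlb_flush_pending = 0;

	/* Analogous to set_tlb_flush_pending(): one more thread batching. */
	static void set_pending(void)
	{
		atomic_fetch_add(&tlb_flush_pending, 1);
		/* (the kernel version also needs a barrier here, as above) */
	}

	/* Analogous to clear_tlb_flush_pending(): this batch is done. */
	static void clear_pending(void)
	{
		atomic_fetch_sub(&tlb_flush_pending, 1);
	}

	/* Analogous to mm_tlb_flush_pending(): any batch still in flight? */
	static bool pending(void)
	{
		return atomic_load(&tlb_flush_pending) > 0;
	}

	int main(void)
	{
		set_pending();   /* thread A starts changing PTEs */
		set_pending();   /* thread B starts changing PTEs */
		clear_pending(); /* thread A finishes and flushes */

		/*
		 * With a bool, A's "clear" would have wiped the indication
		 * while B is still batching; the counter still reads pending.
		 */
		printf("pending: %s\n", pending() ? "yes" : "no"); /* yes */

		clear_pending(); /* thread B finishes */
		printf("pending: %s\n", pending() ? "yes" : "no"); /* no */
		return 0;
	}

The serialized main() stands in for the interleaving that mmap_sem held
for read permits: with a plain bool, thread A's clear races with thread
B's still-pending batch and the flush indication is lost.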