On Tue, Aug 22, 2017 at 12:08 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> So that migration stuff has a filter on, we need two consecutive numa
> faults from the same page_cpupid 'hash', see
> should_numa_migrate_memory().

Hmm. That is only called for MPOL_F_MORON.

We don't actually know what policy the problem space uses, since this
is some specialized load. I could easily see somebody having set
MPOL_PREFERRED with MPOL_F_LOCAL and then touching it from every
single node. Isn't that even the default?

> And since this appears to be anonymous memory (no THP) this is all a
> single address space. However, we don't appear to invalidate TLBs when
> we upgrade the PTE protection bits (not strictly required of course), so
> we can have multiple CPUs trip over the same 'old' NUMA PTE.
>
> Still, generating such a migration storm would be fairly tricky I think.

Well, Mel seems to have been unable to generate a load that reproduces
the long page waitqueues, and I don't think we've had any other reports
of this either. So "fairly tricky" may well be exactly what it takes.

Likely it also takes a user load that does something the people
involved in the automatic NUMA migration would have considered
completely insane and never tested or even thought about. Users
sometimes do completely insane things.

It may have started as a workaround for some particular case where they
did something wrong "on purpose", and then they entirely forgot about
it, and five years later it's running their whole infrastructure and
doing insane things, because the "particular case" it was tested with
was on some broken preproduction machine with totally broken firmware
tables for memory node layout.

                 Linus
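
For readers following along, here is a minimal userspace sketch of the
two-consecutive-faults filter Peter refers to. The names (fake_page,
CPUPID_UNSET, sketch_should_migrate) are made up for illustration; the
real logic lives in should_numa_migrate_memory() and considers details
this sketch deliberately omits:

	/*
	 * Simplified sketch of the filter idea: the kernel records the
	 * last faulting cpu/pid ("cpupid") in the page flags, and only
	 * migrates once two consecutive NUMA hinting faults on a page
	 * come from the same cpupid 'hash'.  Not the real kernel code.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define CPUPID_UNSET (-1)

	struct fake_page {
		int last_cpupid;	/* cpupid seen at the previous fault */
	};

	/* Return true if this NUMA hinting fault should migrate the page. */
	static bool sketch_should_migrate(struct fake_page *page, int this_cpupid)
	{
		int last = page->last_cpupid;

		/* Always record who faulted, like page_cpupid_xchg_last(). */
		page->last_cpupid = this_cpupid;

		/* First recorded fault: remember it, but don't migrate yet. */
		if (last == CPUPID_UNSET)
			return false;

		/* Two consecutive faults from the same cpupid: migrate. */
		return last == this_cpupid;
	}

	int main(void)
	{
		struct fake_page page = { .last_cpupid = CPUPID_UNSET };

		/* Faults from alternating tasks never pass the filter... */
		printf("%d\n", sketch_should_migrate(&page, 1));	/* 0 */
		printf("%d\n", sketch_should_migrate(&page, 2));	/* 0 */
		/* ...but two consecutive faults from the same one do. */
		printf("%d\n", sketch_should_migrate(&page, 2));	/* 1 */
		return 0;
	}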
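
And a hedged sketch of the scenario Linus describes: a task that sets
the preferred/local policy and then touches the same anonymous mapping
from whichever node it happens to run on. set_mempolicy(2) with
MPOL_PREFERRED and an empty nodemask requests local allocation (the
kernel's internal MPOL_F_LOCAL case); MAP_SIZE and the pass count are
arbitrary, and whether the task actually moves between nodes is up to
the scheduler:

	/* Build with: cc -o touch-everywhere touch-everywhere.c -lnuma */
	#include <numaif.h>	/* set_mempolicy(), MPOL_PREFERRED */
	#include <sys/mman.h>
	#include <string.h>
	#include <stdlib.h>

	#define MAP_SIZE (64UL << 20)	/* 64MB, arbitrary */

	int main(void)
	{
		/*
		 * MPOL_PREFERRED with an empty nodemask means "allocate on
		 * the local node" -- effectively what the default policy
		 * does anyway.
		 */
		if (set_mempolicy(MPOL_PREFERRED, NULL, 0))
			exit(1);

		char *mem = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (mem == MAP_FAILED)
			exit(1);

		/*
		 * Each pass may run on a different node if the scheduler
		 * moves us around; every touch is then a potential NUMA
		 * hinting fault against remotely-placed pages.
		 */
		for (int pass = 0; pass < 100; pass++)
			memset(mem, pass, MAP_SIZE);

		return 0;
	}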