On Sun, Mar 8, 2015 at 3:02 AM, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
> Well, there's a difference in what we write to the pte:
>
>   #define _PAGE_BIT_NUMA      (_PAGE_BIT_GLOBAL+1)
>   #define _PAGE_BIT_PROTNONE  _PAGE_BIT_GLOBAL
>
> and our expectation was that the two should be equivalent methods from
> the POV of the NUMA balancing code, right?

Right.

But yes, we might have screwed something up. In particular, there
might be something that thinks it cares about the global bit, but
doesn't notice that the present bit isn't set, so it considers the
protnone mappings to be global and causes lots more TLB flushes etc.

>> I don't like the old pmdp_set_numa() because it can drop dirty bits,
>> so I think the old code was actively buggy.
>
> Could we, as a silly testing hack not to be applied, write a
> hack-patch that re-introduces the racy way of setting the NUMA bit, to
> confirm that it is indeed this difference that changes pte visibility
> across CPUs enough to create so many more faults?

So one of Mel's patches did that, but I don't know if Dave tested it.

And thinking about it, it *may* be safe for huge pages, if they always
already have the dirty bit set to begin with. And I don't see how we
could have a clean hugepage (apart from the special case of the
zeropage, which is read-only, so races on the dirty bit aren't an
issue).

So it might actually be that the non-atomic version is safe for
hugepages. And we could possibly get rid of the "atomic
read-and-clear" even for the non-NUMA case. I'd rather do it for both
cases than for just one of them.

But:

> As a second hack (not to be applied), could we change:
>
>   #define _PAGE_BIT_PROTNONE  _PAGE_BIT_GLOBAL
>
> to:
>
>   #define _PAGE_BIT_PROTNONE  (_PAGE_BIT_GLOBAL+1)
>
> to double check that the position of the bit does not matter?

Agreed. We should definitely try that. Dave?

Also, is there some sane way for me to actually see this behavior on a
regular machine with just a single socket?
Dave is apparently running in some fake-numa setup; I'm wondering if
this is easy enough to reproduce that I could see it myself.

                Linus

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs