Re: [patch 01/15] mm/memory.c: avoid access flag update TLB flush for retried page fault

On 7/24/20 6:29 PM, Linus Torvalds wrote:
> On Fri, Jul 24, 2020 at 5:37 PM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
>> A follow-up question about your comment in the previous email "The
>> notion of "this is a retry, so let's do nothing" is fundamentally
>> wrong.", do you mean it is not safe?
> I mean it fails my "smell test".
>
> The patch didn't just avoid the TLB flush, it avoided all the other
> "mark it dirty and young" things too. And that made me go "why would
> RETRY be different in this regard"?
>
> It sounds unsafe, because it basically means that a retry does
> something else than the initial page fault handling would do.
>
> See what worries me and makes me go "that's not safe"?

>> Or since we have pte_same check, we
>> should just rely on it to skip unnecessary TLB flush?
> Right. That makes me much happier, because if the retry flag is only
> used to avoid a TLB flush (when the pte's are identical, of course),
> then I feel that the retry path is _logically_ all the same. The page
> tables end up looking exactly the same, and the only difference is
> whether we do that TLB invalidate for a spurious fault.
>
> And that, in turn, makes me feel it is safe, because even if it turns
> out that "yes, we keep getting a spurious fault because we have some
> stale TLB entries", then checking the RETRY bit is fine: we'll do a
> full page fault next time around without the retry bit set.
>
> So that's why I feel that your patch is sketchy and unsafe, but I
> don't worry about testing the RETRY bit in that "clear spurious TLB
> entries" case.
>
> See?

Yes, I got your point. Thanks for elaborating.


>>> Can somebody flesh out the comment about the
>>> "spurious_protection_fault()" thing? Because something like this I
>>> wouldn't mind, but I'd like that comment to explain the
>>> FAULT_FLAG_WRITE part too.
>> I'm not quite familiar with other architectures; my wild guess is
>> FAULT_FLAG_WRITE is a cheap way to tell us if this is a .text page or
>> not.
> Yes. However, I'm not seeing why a text page would be so special.
>
> IOW, if it's ok to skip the TLB flush for a text page, then why isn't
> it ok to skip for a normal page?

It looks like a normal page is skipped too, unless it is a write fault. The comment might be a little misleading.

A read fault should just change the young bit, and the TLB typically won't get flushed when only the young bit is changed; the flush can be deferred to a later write fault, which may change access permissions and/or the dirty bit.


> My suspicion is that we have stale TLB entries for potentially
> multiple different reasons:
>
>   - software optimizations, where we decide "skip the TLB flush,
> because it's expensive and it is likely to never matter".
>
>     I have a _memory_ of us doing this when we have a pure "loosening"
> of the protections (IOW, make something writable that wasn't writable
> before), but I can't actually find the code. I'm thinking things like
> the wp_page_reuse() case.
>
>   - temporarily stale TLB entries because we've _just_ updated them on
> another CPU, but it hasn't gotten to the actual TLB flush yet.
>
>     By the time we actually get to this point, we'll have serialized
> with the page table lock, but the *fault* happened when the CPU saw
> the original stale TLB entry, so we took the fault with what is now a
> stale TLB entry.
>
>   - actual software bugs where we've not flushed the TLB properly.
>
> Anyway, the _reason_ for that "flush_tlb_fix_spurious_fault()" is that
> some architectures don't flush their TLB on a fault.
>
> So if you don't flush the TLB when taking a page fault, and you may
> have these stale TLB entries around, you'll just keep faulting until
> enough other system events happen that end up flushing the TLB
> sufficiently.
>
> On an otherwise idle system, that "keep faulting until enough other
> system events happen" might be effectively forever.
>
> For any architecture that guarantees that a page fault will always
> flush the old TLB entry for this kind of situation, that
> flush_tlb_fix_spurious_fault() thing can be a no-op.
>
> So that's why on x86, we just do
>
>    #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
>
> and have no issues.
>
> Note that it does *not* need to do any cross-CPU flushing or anything
> like that. So it's actually wrong (I think) to have that default
> fallback of
>
>    #define flush_tlb_fix_spurious_fault(vma, address) \
>        flush_tlb_page(vma, address)
>
> because flush_tlb_page() is the serious "do cross-CPU etc." version.
>
> Does the arm64 flush_tlb_page() perhaps do the whole expensive
> cross-CPU thing rather than the much cheaper "just local invalidate"
> version?
>
> The "random letter combination" thing that ARM documentation uses for
> these things is really confusing, but I think the "is" in "vale1is"
> means that it's broadcast to all "inner shareable" - ie CPU cores.
>
> I get the feeling that on arm64, flush_tlb_fix_spurious_fault() should
> either be a no-op, or it should perhaps be a non-broadcasting version
> of the TLB invalidates, and use just "vale1".

> Catalin? Will?
>
>                  Linus




