Potential race in TLB flush batching?

Something bothers me about the TLB flush batching mechanism that Linux
uses on x86, and I would appreciate your opinion on it.

As you know, try_to_unmap_one() can batch TLB invalidations. While it does
so, however, the page-table lock(s) are not held, and I see no indication
of the pending flush being saved (and consulted) in the relevant mm structs.

So, my question: what prevents, at least in theory, the following scenario:

	CPU0 				CPU1
	----				----
					user accesses memory using RW PTE 
					[PTE now cached in TLB]
	try_to_unmap_one()
	==> ptep_get_and_clear()
	==> set_tlb_ubc_flush_pending()
					mprotect(addr, PROT_READ)
					==> change_pte_range()
					==> [ PTE non-present - no flush ]

					user writes using cached RW PTE
	...

	try_to_unmap_flush()


As you can see, CPU1's write should have failed, but may succeed.

Now, I don't have a PoC, since in practice it seems hard to create such a
scenario: try_to_unmap_one() is likely to find the PTE accessed, and the
PTE would not be reclaimed.

Yet, isn’t it a problem? Am I missing something?

Thanks,
Nadav
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .