Re: TLB and PTE coherency during munmap

On Wed, May 29, 2013 at 05:15:28AM +0100, Max Filippov wrote:
> On Tue, May 28, 2013 at 6:35 PM, Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> > On 26 May 2013 03:42, Max Filippov <jcmvbkbc@xxxxxxxxx> wrote:
> >> Is it intentional that threads of a process that invoked the munmap
> >> syscall can see TLB entries pointing to already freed pages, or is it
> >> a bug?
> >
> > If it happens, this would be a bug. It means that a process can access
> > a physical page that has been allocated to something else, possibly
> > kernel data.
> >
> >> I'm talking about zap_pmd_range and zap_pte_range:
> >>
> >>       zap_pmd_range
> >>         zap_pte_range
> >>           arch_enter_lazy_mmu_mode
> >>             ptep_get_and_clear_full
> >>             tlb_remove_tlb_entry
> >>             __tlb_remove_page
> >>           arch_leave_lazy_mmu_mode
> >>         cond_resched
> >>
> >> With the default arch_{enter,leave}_lazy_mmu_mode, tlb_remove_tlb_entry
> >> and __tlb_remove_page, the loop in zap_pte_range clears PTEs and frees
> >> the corresponding pages but does not flush the TLB, while the
> >> surrounding loop in zap_pmd_range calls cond_resched. If a thread of
> >> the same process gets scheduled in at that point, it can still see TLB
> >> entries pointing to already freed physical pages.
> >
> > It looks to me like cond_resched() here introduces a possible bug, but
> > it depends on the actual arch code, especially the
> > __tlb_remove_tlb_entry() function. On ARM we record the range in
> > tlb_remove_tlb_entry() and queue the pages to be removed in
> > __tlb_remove_page(). It pretty much acts like tlb_fast_mode() == 0
> > even for the UP case (which is also needed for hardware speculative
> > TLB loads). tlb_finish_mmu() then takes care of whatever pages are
> > left to be freed.
> >
> > With a dummy __tlb_remove_tlb_entry() and tlb_fast_mode() == 1,
> > cond_resched() in zap_pmd_range() would cause problems.
> 
> So it looks like most architectures in a UP configuration would have
> this issue (unless they flush the TLB in switch_mm, even when switching
> to the same mm):

switch_mm() wouldn't be called when switching to the same mm. You could
do the flush in switch_to() instead (or before returning to user space
on the same processor), but that's not efficient.
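
To make the UP scenario above concrete, here is a rough sketch of the
generic behaviour being discussed (paraphrased from
include/asm-generic/tlb.h, not a verbatim copy; the mmu_gather fields
are simplified and the details vary across kernel versions):

/* Sketch only: paraphrased from include/asm-generic/tlb.h, with
 * the mmu_gather fields simplified for illustration. */
static inline int tlb_fast_mode(struct mmu_gather *tlb)
{
#ifdef CONFIG_SMP
        return tlb->fast_mode;
#else
        /*
         * UP kernels always report fast mode, on the assumption that
         * nothing can observe a stale TLB entry before the local
         * flush. cond_resched() in zap_pmd_range() breaks that
         * assumption for multi-threaded processes.
         */
        return 1;
#endif
}

static inline void __tlb_remove_page(struct mmu_gather *tlb,
                                     struct page *page)
{
        if (tlb_fast_mode(tlb)) {
                /*
                 * The page goes back to the allocator immediately,
                 * but the TLB entry cleared by
                 * ptep_get_and_clear_full() has not been flushed yet.
                 * Another thread of the same mm scheduled in at
                 * cond_resched() can still hit the stale entry.
                 */
                free_page_and_swap_cache(page);
                return;
        }
        /* Batched mode: the page is queued and only freed after the
         * TLB flush in tlb_flush_mmu()/tlb_finish_mmu(). */
        tlb->pages[tlb->nr++] = page;
        if (tlb->nr >= tlb->max)
                tlb_flush_mmu(tlb);
}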

Do you happen to have a user-space test for this? Something like: one
thread does an mmap(), writes some poison value, then munmap()s. The
other thread keeps checking for the poison value while trapping and
ignoring any SIGSEGV. If everything is working correctly, the second
thread should only ever get a SIGSEGV or read the poison value.
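
Roughly what I have in mind (untested sketch; the poison value, page
size and fixed-address handling are arbitrary, and detection is
probabilistic since the freed page must be reused before the stale TLB
entry goes away):

/* Untested sketch: one thread maps/poisons/unmaps a fixed address,
 * the other keeps reading it while trapping SIGSEGV. */
#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define PAGE    4096
#define POISON  0x5a5a5a5aUL

static volatile unsigned long *map;
static __thread sigjmp_buf jb;

static void segv_handler(int sig)
{
        (void)sig;
        siglongjmp(jb, 1);              /* trap and ignore the fault */
}

static void *mapper(void *arg)
{
        (void)arg;
        for (;;) {
                unsigned long *p = mmap((void *)map, PAGE,
                                        PROT_READ | PROT_WRITE,
                                        MAP_PRIVATE | MAP_ANONYMOUS |
                                        MAP_FIXED, -1, 0);
                if (p == MAP_FAILED)
                        exit(1);
                *p = POISON;            /* write the poison value */
                munmap(p, PAGE);
        }
        return NULL;
}

static void *checker(void *arg)
{
        (void)arg;
        for (;;) {
                if (sigsetjmp(jb, 1))
                        continue;       /* SIGSEGV: expected, keep going */
                unsigned long v = *map;
                /* 0 is a freshly mapped zero page, POISON is ours;
                 * anything else came through a stale TLB entry from a
                 * page that was already freed and reused. */
                if (v != 0 && v != POISON) {
                        fprintf(stderr, "stale data: %#lx\n", v);
                        exit(1);
                }
        }
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        signal(SIGSEGV, segv_handler);
        /* pick a fixed address both threads will use */
        map = mmap(NULL, PAGE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (map == MAP_FAILED)
                return 1;
        munmap((void *)map, PAGE);

        pthread_create(&t1, NULL, mapper, NULL);
        pthread_create(&t2, NULL, checker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}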

-- 
Catalin

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx



