Looks like the right fix is to add a cache flush to tlb_start_vma().  See
the patch attached.  Unless someone objects, I will check it in later.

BTW, I really don't like the function naming of tlb_start_vma() and
tlb_end_vma().  :)

Jun

On Wed, Jan 14, 2004 at 10:23:16PM -0800, David S. Miller wrote:
> On Wed, 14 Jan 2004 17:40:12 -0800
> Jun Sun <jsun@mvista.com> wrote:
>
> > Looking at my tree (which is from linux-mips.org), it appears
> > arm, sparc, sparc64, and sh have tlb_start_vma() defined to call
> > cache flushing.
>
> Correct; in fact, on every platform where cache flushing matters
> at all (i.e. where the flush_cache_*() routines actually need to
> flush a cpu cache), tlb_start_vma() should do such a flush.
>
> > What exactly do tlb_start_vma()/tlb_end_vma() mean? There is
> > only one invocation instance, which is significant enough to infer
> > the meaning. :)
>
> When the kernel unmaps a mmap region of a process (either for the
> sake of munmap() or to tear down all mappings during exit()),
> tlb_start_vma() is called, the page table mappings in the region are
> torn down one by one, and then a tlb_end_vma() call is made.
>
> At the top level, i.e. in whoever invokes unmap_page_range(), there
> will be a tlb_gather_mmu() call.
>
> In order to properly optimize the cache flushes, most platforms do
> the following:
>
> 1) The tlb->fullmm boolean keeps track of whether this is just a
>    munmap() unmapping operation (if zero) or a full address space
>    teardown (if non-zero).
>
> 2) In the full address space teardown case, where tlb->fullmm is
>    non-zero, the top level does the explicit flush_cache_mm()
>    (see mm/mmap.c:exit_mmap()), so the tlb_start_vma()
>    implementation need not do the flush; otherwise it does.
>
> This is why sparc64 and friends implement it like this:
>
> #define tlb_start_vma(tlb, vma) \
> 	do { if (!(tlb)->fullmm) \
> 		flush_cache_range(vma, vma->vm_start, vma->vm_end); \
> 	} while (0)
>
> Hope this clears things up.
>
> Someone should probably take what I just wrote, expand and organize it,
> then add such content to Documentation/cachetlb.txt
diff -Nru linux/include/asm-mips/tlb.h.orig linux/include/asm-mips/tlb.h
--- linux/include/asm-mips/tlb.h.orig	Thu Oct 31 08:35:52 2002
+++ linux/include/asm-mips/tlb.h	Thu Jan 15 10:02:14 2004
@@ -2,9 +2,14 @@
 #define __ASM_TLB_H
 
 /*
- * MIPS doesn't need any special per-pte or per-vma handling..
+ * MIPS doesn't need any special per-pte or per-vma handling, except
+ * we need to flush cache for area to be unmapped.
  */
-#define tlb_start_vma(tlb, vma) do { } while (0)
+#define tlb_start_vma(tlb, vma) \
+	do { \
+		if (!tlb->fullmm) \
+			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
+	} while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)