On Wed, Dec 05, 2007 at 01:14:15PM -0800, Tony Lindgren wrote:
> +static void omap_mmu_clear_pte(struct omap_mmu *mmu, unsigned long virt)
> +{
> +	pte_t *ptep, *end;
> +	pmd_t *pmdp;
> +	struct mm_struct *mm = mmu->twl_mm;
> +
> +	spin_lock(&mm->page_table_lock);
> +
> +	pmdp = pmd_offset(pgd_offset(mm, virt), virt);
> +
> +	if (pmd_none(*pmdp))
> +		goto out;

This covers the all-zero case.

> +
> +	if (!pmd_table(*pmdp))
> +		goto invalidate_pmd;

This covers the least significant two bits not being '01'. However, the
classic detection for non-page tables is to use pmd_bad(), or rather to
use pmd_none_or_clear_bad().

> +
> +	ptep = pte_offset_kernel(pmdp, virt);
> +	pte_clear(mm, virt, ptep);
> +	flush_pmd_entry((pmd_t *)ptep);

No. That's wrong, very wrong, and that cast just screams that it's
wrong. Since pte_clear() calls set_pte_ext(), which calls the
per-processor implementation, which in turn handles the cache issues,
this flush is just not required.

> +
> +	/* zap pte */
> +	end = pmd_page_vaddr(*pmdp);
> +	ptep = end - PTRS_PER_PTE;
> +	while (ptep < end) {
> +		if (!pte_none(*ptep))
> +			goto out;
> +		ptep++;
> +	}
> +	pte_free_kernel(pmd_page_vaddr(*pmdp));

So you free the PMD table.

> +
> + invalidate_pmd:
> +	pmd_clear(pmdp);
> +	flush_pmd_entry(pmdp);

And then remove it from the page tables. Two problems: if the entries
have been copied to another task, you'll be leaving the PMD table
present there. Secondly, if you're using this with SMP it's hellishly
racy (which is why we don't support section mappings in ioremap on
SMP).

Haven't finished reviewing the rest of the patch.

--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html