On 10/10/2010 18:46, Andi Kleen wrote:
> This won't work at all on x86 because you don't handle large
> pages.
>
> And it doesn't work on x86-64 because the first 2GB are double
> mapped (direct and kernel text mapping)
>
> Thirdly I expect it won't either on architectures that map
> the direct mapping with special registers (like IA64 or MIPS)

Andi,

what do you think about using the already implemented follow_pte instead?

int writeable_kernel_pte_range(unsigned long address, unsigned long size,
                               unsigned int rw)
{
        unsigned long addr = address & PAGE_MASK;
        unsigned long end = address + size;
        unsigned long start = addr;
        int ret = -EINVAL;
        pte_t *ptep, pte;
        spinlock_t *lock = &init_mm.page_table_lock;

        do {
                ret = follow_pte(&init_mm, addr, &ptep, &lock);
                if (ret)
                        goto out;
                pte = *ptep;
                if (pte_present(pte)) {
                        /* toggle the write bit on this kernel pte */
                        pte = rw ? pte_mkwrite(pte) : pte_wrprotect(pte);
                        *ptep = pte;
                }
                pte_unmap_unlock(ptep, lock);
                addr += PAGE_SIZE;
        } while (addr && (addr < end));
        ret = 0;
out:
        /* the mapping changed, flush stale TLB entries for the range */
        flush_tlb_kernel_range(start, end);
        return ret;
}

> I'm not sure this is very useful anyways. It doesn't protect
> against stray DMA and it doesn't protect against writes through
> broken user PTEs.
>
> -Andi
>

It's a way to have more protection against kernel bugs, which can be
important for an in-memory fs. In any case this option can be
enabled/disabled at the fs level.

Marco
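
P.S. Just to make the intended usage concrete, here is a minimal sketch
(hypothetical helper name, not code from the patch) of how the fs write
path could open a write window around a store and re-protect the pages
afterwards, assuming the writeable_kernel_pte_range() above:

#include <linux/kernel.h>
#include <linux/string.h>

/*
 * Hypothetical example, not part of the patch: the fs keeps its
 * backing pages read-only in the kernel mapping and only lifts the
 * protection around its own stores, so a wild write from unrelated
 * kernel code faults instead of silently corrupting fs data.
 */
static int myfs_write_block(void *dst, const void *src, size_t len)
{
        unsigned long addr = (unsigned long)dst;
        int err;

        /* make the destination pages writeable in the kernel mapping */
        err = writeable_kernel_pte_range(addr, len, 1);
        if (err)
                return err;

        memcpy(dst, src, len);

        /* restore write protection once the store is done */
        return writeable_kernel_pte_range(addr, len, 0);
}

Outside that window the area stays read-only, which of course doesn't
help against stray DMA, but it does turn many stray-pointer bugs into an
immediate fault instead of silent corruption.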