I believe the following change will fix the cache/TLB inconsistency observed
by Meelis.  After changing the page table entries, we need to flush the cache
and TLB to ensure that we don't have any stale PTE values in the cache or TLB.
The alternative patching is done after all CPUs are running.  Thus, we need to
flush the whole cache and TLB.

I included the init section in the range modified by map_pages as suggested
by Helge.  Some routines in the init section may require patching.

diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index e7e626bcd0be..f88a52b8531c 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -513,17 +513,15 @@ static void __init map_pages(unsigned long start_vaddr,
 
 void __init set_kernel_text_rw(int enable_read_write)
 {
-	unsigned long start = (unsigned long)_stext;
+	unsigned long start = (unsigned long)__init_begin;
 	unsigned long end = (unsigned long)_etext;
 
 	map_pages(start, __pa(start), end-start, PAGE_KERNEL_RWX,
 		enable_read_write ? 1:0);
 
-	/* force the kernel to see the new TLB entries */
-	__flush_tlb_range(0, start, end);
-
-	/* dump old cached instructions */
-	flush_icache_range(start, end);
+	/* force the kernel to see the new page table entries */
+	flush_cache_all();
+	flush_tlb_all();
 }
 
 void __ref free_initmem(void)

Signed-off-by: John David Anglin <dave.anglin@xxxxxxxx>
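
For readability, this is roughly how set_kernel_text_rw() reads with the hunk
applied; it is reconstructed from the diff above and assumes the unchanged
context lines are exactly as shown there, so it is a reference sketch rather
than the authoritative source:

void __init set_kernel_text_rw(int enable_read_write)
{
	unsigned long start = (unsigned long)__init_begin;
	unsigned long end = (unsigned long)_etext;

	map_pages(start, __pa(start), end-start, PAGE_KERNEL_RWX,
		enable_read_write ? 1:0);

	/* force the kernel to see the new page table entries */
	flush_cache_all();
	flush_tlb_all();
}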