In our defense, we didn't know we were sinning at the time.

Can you walk me through the cache flushing hole? How is it okay on x86 but not on VIVT architectures? I'm missing something obvious here.

I thought earlier that vm_insert_mixed() handled the necessary flushing. Is that even the part you are worried about? vm_insert_mixed() -> insert_pfn() -> update_mmu_cache() _should_ handle the flush. Except, of course, now that I look at the ARM code, it doesn't appear to do anything at all if !pfn_valid() (rough sketch at the end of this mail). <sigh> I need to spend some more time looking at this again.

What flushing functions would you call if you did have a struct page? There are all kinds of cache flushing functions that work without one. If nothing else, the specialized ASM instructions that do the various flushes don't take a struct page as a parameter (second example at the end). This isn't the first time I've run into the lack of a sane cache API; grep for inval_cache in the MTD drivers, which should have been much easier than it was.

Isn't the proper solution to fix update_mmu_cache() or to build out a pageless cache flushing API?

I don't get the explicit mapping solution. What are you mapping where? Which addresses would need to be SHMLBA-aligned: physical, kernel, or userspace?
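
For reference, here is roughly what I'm looking at on the ARM side. This is a from-memory paraphrase of arch/arm/mm/fault-armv.c, not a verbatim copy, and the details differ between kernel versions:

	/*
	 * Rough paraphrase of ARM's update_mmu_cache(); not verbatim.
	 */
	void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
			      pte_t *ptep)
	{
		unsigned long pfn = pte_pfn(*ptep);
		struct address_space *mapping;
		struct page *page;

		/* The hole: a pfn with no struct page gets no flush at all. */
		if (!pfn_valid(pfn))
			return;

		page = pfn_to_page(pfn);
		if (page == ZERO_PAGE(0))
			return;

		/*
		 * Everything from here on is keyed off the struct page:
		 * flush the kernel alias, and on VIVT walk the page's
		 * other user mappings to make them coherent.
		 */
		mapping = page_mapping(page);
		if (!test_and_set_bit(PG_dcache_clean, &page->flags))
			__flush_dcache_page(mapping, page);
		if (mapping && cache_is_vivt())
			make_coherent(mapping, vma, addr, ptep, pfn);
	}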
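
And on the pageless flushing point, this is the sort of thing I mean. The helper below is hypothetical (the name and the range handling are made up for illustration, and the loop is ARMv7-specific), but the underlying op takes a virtual address, not a struct page; the in-tree wrappers for this kind of loop, e.g. __cpuc_flush_dcache_area(), likewise take a VA and a size:

	/*
	 * Hypothetical example only.  DCCIMVAC ("mcr p15, 0, <Rt>, c7, c14, 1")
	 * cleans and invalidates a D-cache line by virtual address -- no
	 * struct page anywhere in sight.
	 */
	static void flush_dcache_va_range(unsigned long start, unsigned long end)
	{
		unsigned long addr;

		start &= ~(L1_CACHE_BYTES - 1);
		for (addr = start; addr < end; addr += L1_CACHE_BYTES)
			asm volatile("mcr p15, 0, %0, c7, c14, 1"  /* DCCIMVAC */
				     : : "r" (addr) : "memory");
		dsb();	/* make sure the flush completes before remap/DMA */
	}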