[sorry, trying to deal with top-posting here]

On Wed, Feb 21, 2018 at 07:36:34AM +0000, Wangxuefeng (E) wrote:
> The old flow of reuse the 4k page as 2M page does not follow the BBM flow
> for page table reconstruction,not only the memory leak problems. If BBM flow
> is not followed,the speculative prefetch of tlb will made false tlb entries
> cached in MMU, the false address will be got, panic will happen.

If I understand Toshi's suggestion correctly, he's saying that the PMD can
be cleared when unmapping the last PTE (like try_to_free_pte_page). In this
case, there's no issue with the TLB, because this is exactly BBM -- the PMD
is cleared and TLB invalidation is issued before the PTE table is freed. A
subsequent 2M map request will see an empty PMD and put down a block
mapping.

The downside is that freeing becomes more expensive as the last-level table
becomes more sparsely populated, and you need to ensure that you don't have
any concurrent maps going on for the same table when you're unmapping. I
also can't see a neat way to fit this into the current vunmap code; perhaps
we need an iounmap_page_range.

In the meantime, the code in lib/ioremap.c looks totally broken, so I think
we should deselect CONFIG_HAVE_ARCH_HUGE_VMAP on arm64 until it's fixed.

Will
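
For readers unfamiliar with break-before-make (BBM): the key point is the
ordering of the teardown steps. The following is only a rough sketch of that
ordering, not the actual lib/ioremap.c or proposed patch code; the helper
usage here is loose and illustrative:

```c
/*
 * Hypothetical sketch of the BBM ordering described above: break the
 * mapping and invalidate the TLB *before* freeing the PTE table, so a
 * speculative table walk can never cache entries from a freed table.
 */
static void unmap_last_pte_and_free(pmd_t *pmd, unsigned long addr)
{
	pte_t *table = pte_offset_kernel(pmd, 0);

	/* 1. Break: disconnect the PTE table from the page tables. */
	pmd_clear(pmd);

	/* 2. Invalidate: ensure no walker still sees the old table. */
	flush_tlb_kernel_range(addr & PMD_MASK,
			       (addr & PMD_MASK) + PMD_SIZE);

	/*
	 * 3. Make: only now is it safe to free the table. A subsequent
	 * 2M map request sees an empty PMD and can install a block
	 * mapping without ever aliasing the stale page-table entries.
	 */
	pte_free_kernel(&init_mm, table);
}
```

The bug being discussed is precisely that the old flow skipped steps 1 and 2
before reusing the PMD as a block mapping, so stale 4k translations could
remain cached alongside the new 2M one.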