Hi Will,
Thanks for your explanation. If "pmd cleared" means making the pmd an invalid entry, that's no problem.
Thanks
Wangxuefeng
Sent from HUAWEI AnyOffice
From: will.deacon
To:
Cc: toshi.kani, linux-arm-kernel, cpandya, linux-kernel, 郭寒军, Linuxarm, linux-mm, akpm, mark.rutland, catalin.marinas, mhocko, hanjun.guo
Date: 2018-02-21 19:58:15
Subject: Re: Re: [RFC patch] ioremap: don't set up huge I/O mappings when p4d/pud/pmd is zero
[sorry, trying to deal with top-posting here]
On Wed, Feb 21, 2018 at 07:36:34AM +0000, Wangxuefeng (E) wrote:
> The old flow of reusing a 4K page table as a 2M block mapping does not
> follow the BBM (break-before-make) flow for page table reconstruction,
> and the memory leak is not the only problem. If BBM is not followed,
> speculative TLB walks can cache stale entries in the MMU, the wrong
> address will be returned, and a panic will happen.
If I understand Toshi's suggestion correctly, he's saying that the PMD can
be cleared when unmapping the last PTE (like try_to_free_pte_page). In this
case, there's no issue with the TLB because this is exactly BBM -- the PMD
is cleared and TLB invalidation is issued before the PTE table is freed. A
subsequent 2M map request will see an empty PMD and put down a block
mapping.
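A minimal sketch of that sequence, in the spirit of x86's
try_to_free_pte_page (the helper name and its placement are
illustrative, not an existing kernel function):

	/*
	 * Illustrative only: free a fully-empty PTE table under BBM.
	 * The PMD is cleared and the TLB invalidated *before* the
	 * table page is freed, so no walker can cache a stale entry.
	 */
	static void free_empty_pte_table(pmd_t *pmd, unsigned long addr)
	{
		pte_t *pte = pte_offset_kernel(pmd, addr & PMD_MASK);
		int i;

		for (i = 0; i < PTRS_PER_PTE; i++)
			if (!pte_none(pte[i]))
				return;		/* table still in use */

		pmd_clear(pmd);			/* break ... */
		flush_tlb_kernel_range(addr & PMD_MASK,
				       (addr & PMD_MASK) + PMD_SIZE);
		pte_free_kernel(&init_mm, pte);	/* ... then free */
	}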
The downside is that freeing becomes more expensive as the last-level
table becomes more sparsely populated, and you need to ensure you don't
have any concurrent maps going on for the same table while you're
unmapping. I also
can't see a neat way to fit this into the current vunmap code. Perhaps we
need an iounmap_page_range.
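Purely as a sketch of where such a routine could hook in
(iounmap_page_range, iounmap_pte_range and free_empty_pte_table are all
hypothetical names; only the PMD level is shown):

	/*
	 * Hypothetical PMD level of an iounmap_page_range(): clear the
	 * PTEs as vunmap does, then try to prune the now-empty table so
	 * a later huge ioremap() finds an empty PMD.  The caller is
	 * still responsible for TLB invalidation of any cleared block
	 * mappings before the range is reused.
	 */
	static void iounmap_pmd_range(pud_t *pud, unsigned long addr,
				      unsigned long end)
	{
		pmd_t *pmd = pmd_offset(pud, addr);
		unsigned long next;

		do {
			next = pmd_addr_end(addr, end);
			if (pmd_none(*pmd))
				continue;
			if (pmd_huge(*pmd)) {
				pmd_clear(pmd);	/* drop 2M block mapping */
				continue;
			}
			iounmap_pte_range(pmd, addr, next);
			free_empty_pte_table(pmd, addr);
		} while (pmd++, addr = next, addr != end);
	}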
In the meantime, the code in lib/ioremap.c looks totally broken so I think
we should deselect CONFIG_HAVE_ARCH_HUGE_VMAP on arm64 until it's fixed.
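For reference, assuming the select still lives in the main config ARM64
block of arch/arm64/Kconfig, the deselect would be a one-line removal
along these lines:

	--- a/arch/arm64/Kconfig
	+++ b/arch/arm64/Kconfig
	 config ARM64
	-	select HAVE_ARCH_HUGE_VMAP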
Will