On 3/15/2018 7:01 PM, Mark Rutland wrote:
> On Thu, Mar 15, 2018 at 06:15:04PM +0530, Chintan Pandya wrote:
>> @@ -91,10 +93,15 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
>>  		if (ioremap_pmd_enabled() &&
>>  		    ((next - addr) == PMD_SIZE) &&
>> -		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
>> -		    pmd_free_pte_page(pmd)) {
>> -			if (pmd_set_huge(pmd, phys_addr + addr, prot))
>> +		    IS_ALIGNED(phys_addr + addr, PMD_SIZE)) {
>> +			old_pmd = *pmd;
>> +			pmd_clear(pmd);
>> +			flush_tlb_pgtable(&init_mm, addr);
>> +			if (pmd_set_huge(pmd, phys_addr + addr, prot)) {
>> +				pmd_free_pte_page(&old_pmd);
>>  				continue;
>> +			} else
>> +				set_pmd(pmd, old_pmd);
>>  		}
> Can we have something like a pmd_can_set_huge() helper? Then we could
> avoid pointless modification and TLB invalidation work when
> pmd_set_huge() will fail.
Actually, pmd_set_huge() will never fail here: if CONFIG_HAVE_ARCH_HUGE_VMAP
is disabled, ioremap_pmd_enabled() returns false and we never reach it, and
on the architectures that do enable it (i.e. arm64 and x86) the
implementations don't fail. So, rather than restoring the old PMD on
failure, we can do the following.
-			if (pmd_set_huge(pmd, phys_addr + addr, prot)) {
-				pmd_free_pte_page(&old_pmd);
-				continue;
-			} else
-				set_pmd(pmd, old_pmd);
+			pmd_set_huge(pmd, phys_addr + addr, prot);
+			pmd_free_pte_page(&old_pmd);
+			continue;
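
i.e. putting the two hunks together, the block would then read something
like this (just to illustrate the shape of it, not a tested diff):

		if (ioremap_pmd_enabled() &&
		    ((next - addr) == PMD_SIZE) &&
		    IS_ALIGNED(phys_addr + addr, PMD_SIZE)) {
			/* Save the old table entry, then clear and flush
			 * before installing the huge mapping. */
			old_pmd = *pmd;
			pmd_clear(pmd);
			flush_tlb_pgtable(&init_mm, addr);
			pmd_set_huge(pmd, phys_addr + addr, prot);
			/* The old page table page is no longer reachable. */
			pmd_free_pte_page(&old_pmd);
			continue;
		}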
> i.e. make the above:
>
> 	if (ioremap_pmd_enabled() &&
> 	    ((next - addr) == PMD_SIZE) &&
> 	    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
> 	    pmd_can_set_huge(pmd, phys_addr + addr, prot)) {
> 		// clear entries, invalidate TLBs, and free tables
> 		...
> 		continue;
> 	}
> Thanks,
> Mark.
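
FWIW, if we did go the pmd_can_set_huge() route, I'd imagine a generic
fallback along these lines (purely hypothetical, untested), with any
architecture whose pmd_set_huge() has real failure modes providing its
own version:

#ifndef pmd_can_set_huge
/* Hypothetical helper: generic fallback assumes pmd_set_huge() cannot
 * fail once ioremap_pmd_enabled() has been checked. */
static inline bool pmd_can_set_huge(pmd_t *pmd, phys_addr_t phys,
				    pgprot_t prot)
{
	return true;
}
#endif

But given the current implementations, I don't think we need it.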
Chintan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center,
Inc. is a member of the Code Aurora Forum, a Linux Foundation
Collaborative Project