On Wed, 11 Aug 2021 06:34:46 +0100,
Anshuman Khandual <anshuman.khandual@xxxxxxx> wrote:
>
>
>
> On 8/10/21 7:03 PM, Marc Zyngier wrote:
> > On 2021-08-10 08:02, Anshuman Khandual wrote:
> >> All instances here could just directly test against CONFIG_ARM64_XXK_PAGES
> >> instead of evaluating via PAGE_SHIFT or PAGE_SIZE. With this change, there
> >> will be no such usage left.
> >>
> >> Cc: Marc Zyngier <maz@xxxxxxxxxx>
> >> Cc: James Morse <james.morse@xxxxxxx>
> >> Cc: Alexandru Elisei <alexandru.elisei@xxxxxxx>
> >> Cc: Suzuki K Poulose <suzuki.poulose@xxxxxxx>
> >> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> >> Cc: Will Deacon <will@xxxxxxxxxx>
> >> Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> >> Cc: kvmarm@xxxxxxxxxxxxxxxxxxxxx
> >> Cc: linux-kernel@xxxxxxxxxxxxxxx
> >> Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
> >> ---
> >>  arch/arm64/kvm/hyp/pgtable.c | 6 +++---
> >>  arch/arm64/mm/mmu.c          | 2 +-
> >>  2 files changed, 4 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> >> index 05321f4165e3..a6112b6d6ef6 100644
> >> --- a/arch/arm64/kvm/hyp/pgtable.c
> >> +++ b/arch/arm64/kvm/hyp/pgtable.c
> >> @@ -85,7 +85,7 @@ static bool kvm_level_supports_block_mapping(u32 level)
> >>       * Reject invalid block mappings and don't bother with 4TB mappings for
> >>       * 52-bit PAs.
> >>       */
> >> -    return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
> >> +    return !(level == 0 || (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) && level == 1));
> >>  }
> >>
> >>  static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
> >> @@ -155,7 +155,7 @@ static u64 kvm_pte_to_phys(kvm_pte_t pte)
> >>  {
> >>      u64 pa = pte & KVM_PTE_ADDR_MASK;
> >>
> >> -    if (PAGE_SHIFT == 16)
> >> +    if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
> >>          pa |= FIELD_GET(KVM_PTE_ADDR_51_48, pte) << 48;
> >>
> >>      return pa;
> >> @@ -165,7 +165,7 @@ static kvm_pte_t kvm_phys_to_pte(u64 pa)
> >>  {
> >>      kvm_pte_t pte = pa & KVM_PTE_ADDR_MASK;
> >>
> >> -    if (PAGE_SHIFT == 16)
> >> +    if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
> >>          pte |= FIELD_PREP(KVM_PTE_ADDR_51_48, pa >> 48);
> >>
> >>      return pte;
> >> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >> index 9ff0de1b2b93..8fdfca179815 100644
> >> --- a/arch/arm64/mm/mmu.c
> >> +++ b/arch/arm64/mm/mmu.c
> >> @@ -296,7 +296,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> >>  static inline bool use_1G_block(unsigned long addr, unsigned long next,
> >>                                  unsigned long phys)
> >>  {
> >> -    if (PAGE_SHIFT != 12)
> >> +    if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
> >>          return false;
> >>
> >>      if (((addr | next | phys) & ~PUD_MASK) != 0)
> >
> > I personally find it a lot less readable.
> >
> > Also, there is no evaluation whatsoever. All the code guarded
> > by a PAGE_SIZE/PAGE_SHIFT that doesn't match the configuration
> > is dropped at compile time.
>
> The primary idea here is to unify around IS_ENABLED(CONFIG_ARM64_XXK_PAGES)
> usage in arm64, rather than having multiple methods to test page size
> whenever required.

I'm sorry, but I find the idiom extremely painful to parse. If you are
annoyed with the 'PAGE_SHIFT == 12/14/16', consider replacing it with
'PAGE_SIZE == SZ_4/16/64K' instead.

IS_ENABLED(CONFIG_ARM64_XXK_PAGES) also gives the wrong impression that
*multiple* page sizes can be selected at any given time. That's obviously
not the case, which actually makes PAGE_SIZE a much better choice.

As things stand, I don't plan to take such a patch.

Thanks,

	M.
--
Without deviation from the norm, progress is not possible.
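
To make the suggested alternative concrete, below is a sketch (not part of
the thread) of the kvm_pte_to_phys() hunk quoted above, rewritten with the
PAGE_SIZE comparison Marc proposes. It assumes the surrounding definitions
in arch/arm64/kvm/hyp/pgtable.c (KVM_PTE_ADDR_MASK, KVM_PTE_ADDR_51_48);
since PAGE_SIZE and SZ_64K are both compile-time constants, the guarded
code is still dropped entirely on non-64K configurations, exactly as with
IS_ENABLED().

    /*
     * Sketch of the suggested idiom, applied to the kvm_pte_to_phys()
     * hunk from the quoted diff; not a patch that was actually posted.
     */
    static u64 kvm_pte_to_phys(kvm_pte_t pte)
    {
            u64 pa = pte & KVM_PTE_ADDR_MASK;

            /*
             * PAGE_SIZE == SZ_64K is a compile-time constant, so on
             * 4K/16K configurations the compiler eliminates this branch.
             */
            if (PAGE_SIZE == SZ_64K)
                    pa |= FIELD_GET(KVM_PTE_ADDR_51_48, pte) << 48;

            return pa;
    }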