On Sun, 25 Oct 2020 01:27:37 +0100, Gavin Shan <gshan@xxxxxxxxxx> wrote:
>
> The 52-bit physical address support is disabled unless
> CONFIG_ARM64_PA_BITS_52 is chosen. This uses that option for the
> check, to avoid the unconditional check on PAGE_SHIFT in the hot
> path and thus save some CPU cycles.

PAGE_SHIFT is known at compile time, and this code is dropped by the
compiler if the selected page size is not 64K. This patch really only
makes the code slightly less readable, and the "CPU cycles" argument
doesn't hold at all.

So what are you trying to solve exactly?

	M.

>
> Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 0cdf6e461cbd..fd850353ee89 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -132,8 +132,9 @@ static u64 kvm_pte_to_phys(kvm_pte_t pte)
>  {
>  	u64 pa = pte & KVM_PTE_ADDR_MASK;
>
> -	if (PAGE_SHIFT == 16)
> -		pa |= FIELD_GET(KVM_PTE_ADDR_51_48, pte) << 48;
> +#ifdef CONFIG_ARM64_PA_BITS_52
> +	pa |= FIELD_GET(KVM_PTE_ADDR_51_48, pte) << 48;
> +#endif
>
>  	return pa;
>  }
> @@ -142,8 +143,9 @@ static kvm_pte_t kvm_phys_to_pte(u64 pa)
>  {
>  	kvm_pte_t pte = pa & KVM_PTE_ADDR_MASK;
>
> -	if (PAGE_SHIFT == 16)
> -		pte |= FIELD_PREP(KVM_PTE_ADDR_51_48, pa >> 48);
> +#ifdef CONFIG_ARM64_PA_BITS_52
> +	pte |= FIELD_PREP(KVM_PTE_ADDR_51_48, pa >> 48);
> +#endif
>
>  	return pte;
>  }
> --
> 2.23.0
>

--
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm