On Sun, Apr 10, 2022 at 11:52 PM Marc Zyngier <maz@xxxxxxxxxx> wrote:
>
> On Fri, 08 Apr 2022 21:03:24 +0100,
> Kalesh Singh <kaleshsingh@xxxxxxxxxx> wrote:
> >
> > hyp_alloc_private_va_range() can be used to reserve private VA ranges
> > in the nVHE hypervisor. Allocations are aligned based on the order of
> > the requested size.
> >
> > This will be used to implement stack guard pages for KVM nVHE hypervisor
> > (nVHE Hyp mode / not pKVM), in a subsequent patch in the series.
> >
> > Signed-off-by: Kalesh Singh <kaleshsingh@xxxxxxxxxx>
> > Tested-by: Fuad Tabba <tabba@xxxxxxxxxx>
> > Reviewed-by: Fuad Tabba <tabba@xxxxxxxxxx>
> > ---
> >
> > Changes in v7:
> >   - Add Fuad's Reviewed-by and Tested-by tags.
> >
> > Changes in v6:
> >   - Update kernel-doc for hyp_alloc_private_va_range()
> >     and add return description, per Stephen
> >   - Update hyp_alloc_private_va_range() to return an int error code,
> >     per Stephen
> >   - Replace IS_ERR() checks with IS_ERR_VALUE() check, per Stephen
> >   - Clean up goto, per Stephen
> >
> > Changes in v5:
> >   - Align private allocations based on the order of their size, per Marc
> >
> > Changes in v4:
> >   - Handle null ptr in hyp_alloc_private_va_range() and replace
> >     IS_ERR_OR_NULL checks in callers with IS_ERR checks, per Fuad
> >   - Fix kernel-doc comments format, per Fuad
> >
> > Changes in v3:
> >   - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
> >
> >
> >  arch/arm64/include/asm/kvm_mmu.h |  1 +
> >  arch/arm64/kvm/mmu.c             | 66 +++++++++++++++++++++-----------
> >  2 files changed, 45 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index 74735a864eee..a50cbb5ba402 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > @@ -154,6 +154,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
> >  int kvm_share_hyp(void *from, void *to);
> >  void kvm_unshare_hyp(void *from, void *to);
> >  int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
> > +int hyp_alloc_private_va_range(size_t size, unsigned long *haddr);
> >  int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
> >  			   void __iomem **kaddr,
> >  			   void __iomem **haddr);
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 0d19259454d8..3d3efea4e991 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -457,23 +457,22 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
> >  	return 0;
> >  }
> >
> > -static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
> > -					unsigned long *haddr,
> > -					enum kvm_pgtable_prot prot)
> > +
> > +/**
> > + * hyp_alloc_private_va_range - Allocates a private VA range.
> > + * @size:	The size of the VA range to reserve.
> > + * @haddr:	The hypervisor virtual start address of the allocation.
> > + *
> > + * The private virtual address (VA) range is allocated below io_map_base
> > + * and aligned based on the order of @size.
> > + *
> > + * Return: 0 on success or negative error code on failure.
> > + */
> > +int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
> >  {
> >  	unsigned long base;
> >  	int ret = 0;
> >
> > -	if (!kvm_host_owns_hyp_mappings()) {
> > -		base = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
> > -					 phys_addr, size, prot);
> > -		if (IS_ERR_OR_NULL((void *)base))
> > -			return PTR_ERR((void *)base);
> > -		*haddr = base;
> > -
> > -		return 0;
> > -	}
> > -
> >  	mutex_lock(&kvm_hyp_pgd_mutex);
> >
> >  	/*
> > @@ -484,30 +483,53 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
> >  	 *
> >  	 * The allocated size is always a multiple of PAGE_SIZE.
> >  	 */
> > -	size = PAGE_ALIGN(size + offset_in_page(phys_addr));
> > -	base = io_map_base - size;
> > +	base = io_map_base - PAGE_ALIGN(size);
> > +
> > +	/* Align the allocation based on the order of its size */
> > +	base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));
> >
> >  	/*
> >  	 * Verify that BIT(VA_BITS - 1) hasn't been flipped by
> >  	 * allocating the new area, as it would indicate we've
> >  	 * overflowed the idmap/IO address range.
> >  	 */
> > -	if ((base ^ io_map_base) & BIT(VA_BITS - 1))
> > +	if (!base || (base ^ io_map_base) & BIT(VA_BITS - 1))
>
> I don't get this '!base' check. Why isn't it encompassed by the
> 'VA_BITS - 1' flip check?

Hi Marc,

You're right. The flip check handles this as well. I'll drop it in the
next version.
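To spell out the reasoning: the ALIGN_DOWN() above can only move base
further down, and assuming io_map_base sits in the upper half of the
hyp VA range (i.e. has BIT(VA_BITS - 1) set), a base that underflows
to 0 has that bit clear, so the xor test already catches it. A minimal
sketch of the simplified check, not the posted patch:

	/*
	 * base == 0 already differs from io_map_base in
	 * BIT(VA_BITS - 1), so no separate '!base' test is needed.
	 */
	if ((base ^ io_map_base) & BIT(VA_BITS - 1))
		ret = -ENOMEM;
	else
		*haddr = io_map_base = base;

(For scale: with 4K pages, a 12K request is get_order() == 2, so base
is aligned down to a 16K boundary.)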
>
> >  		ret = -ENOMEM;
> >  	else
> > -		io_map_base = base;
> > +		*haddr = io_map_base = base;
> >
> >  	mutex_unlock(&kvm_hyp_pgd_mutex);
> >
> > +	return ret;
> > +}
> > +
> > +static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
> > +					unsigned long *haddr,
> > +					enum kvm_pgtable_prot prot)
> > +{
> > +	unsigned long addr;
> > +	int ret = 0;
> > +
> > +	if (!kvm_host_owns_hyp_mappings()) {
> > +		addr = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
> > +					 phys_addr, size, prot);
> > +		if (IS_ERR_VALUE(addr))
> > +			return addr;
> > +		*haddr = addr;
> > +
> > +		return 0;
> > +	}
> > +
> > +	size += offset_in_page(phys_addr);
>
> This hardly makes any sense on its own. I get it that it is still
> doing the right thing as hyp_alloc_private_va_range() will fix it up,
> but I'd rather you keep the PAGE_ALIGN() here, even if it ends up
> being duplicated.

Ack. (See the sketch at the end of this mail.)

Thanks,
Kalesh

>
> > +	ret = hyp_alloc_private_va_range(size, &addr);
> >  	if (ret)
> > -		goto out;
> > +		return ret;
> >
> > -	ret = __create_hyp_mappings(base, size, phys_addr, prot);
> > +	ret = __create_hyp_mappings(addr, size, phys_addr, prot);
> >  	if (ret)
> > -		goto out;
> > +		return ret;
> >
> > -	*haddr = base + offset_in_page(phys_addr);
> > -out:
> > +	*haddr = addr + offset_in_page(phys_addr);
> >  	return ret;
> >  }
> >
>
> Thanks,
>
> 	M.
>
> --
> Without deviation from the norm, progress is not possible.
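P.S. A sketch of the fixed-up __create_hyp_private_mapping() hunk with
the PAGE_ALIGN() kept at the call site, as requested above (not the
posted patch; the rounding is duplicated inside
hyp_alloc_private_va_range(), but PAGE_ALIGN() is idempotent so the
duplication is harmless):

	/* Round up to a whole number of pages, as before */
	size = PAGE_ALIGN(size + offset_in_page(phys_addr));

	ret = hyp_alloc_private_va_range(size, &addr);
	if (ret)
		return ret;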