On Mon, Mar 15, 2021 at 02:35:29PM +0000, Quentin Perret wrote:
> As the host stage 2 will be identity mapped, all the .hyp memory regions
> and/or memory pages donated to protected guests will have to be marked
> invalid in the host stage 2 page-table. At the same time, the hypervisor
> will need a way to track the ownership of each physical page to ensure
> memory sharing or donation between entities (host, guests, hypervisor) is
> legal.
>
> In order to enable this tracking at EL2, let's use the host stage 2
> page-table itself. The idea is to use the top bits of invalid mappings
> to store the unique identifier of the page owner. The page-table owner
> (the host) gets identifier 0 such that, at boot time, it owns the entire
> IPA space as the pgd starts zeroed.
>
> Provide kvm_pgtable_stage2_set_owner() which allows modifying the
> ownership of pages in the host stage 2. It re-uses most of the map()
> logic, but ends up creating invalid mappings instead. This impacts
> how we do refcount as we now need to count invalid mappings when they
> are used for ownership tracking.
>
> Signed-off-by: Quentin Perret <qperret@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h |  21 +++++
>  arch/arm64/kvm/hyp/pgtable.c         | 127 ++++++++++++++++++++++-----
>  2 files changed, 124 insertions(+), 24 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index 4ae19247837b..683e96abdc24 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -238,6 +238,27 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  			   u64 phys, enum kvm_pgtable_prot prot,
>  			   void *mc);
>
> +/**
> + * kvm_pgtable_stage2_set_owner() - Annotate invalid mappings with metadata
> + *				    encoding the ownership of a page in the
> + *				    IPA space.

The function does more than this, though, as it will also go ahead and
unmap existing valid mappings, which I think should be mentioned here,
no?
> +int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
> +				 void *mc, u8 owner_id)
> +{
> +	int ret;
> +	struct stage2_map_data map_data = {
> +		.phys		= KVM_PHYS_INVALID,
> +		.mmu		= pgt->mmu,
> +		.memcache	= mc,
> +		.mm_ops		= pgt->mm_ops,
> +		.owner_id	= owner_id,
> +	};
> +	struct kvm_pgtable_walker walker = {
> +		.cb		= stage2_map_walker,
> +		.flags		= KVM_PGTABLE_WALK_TABLE_PRE |
> +				  KVM_PGTABLE_WALK_LEAF |
> +				  KVM_PGTABLE_WALK_TABLE_POST,
> +		.arg		= &map_data,
> +	};
> +
> +	if (owner_id > KVM_MAX_OWNER_ID)
> +		return -EINVAL;
> +
> +	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
> +	dsb(ishst);

Why is the DSB needed here? AFAICT, we only ever unmap a valid entry
(which will have a DSB as part of the TLBI sequence) or we update the
owner for an existing invalid entry, in which case the walker doesn't
care.

Will
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm