On Tue, 07 Mar 2023 03:45:46 +0000,
Ricardo Koller <ricarkol@xxxxxxxxxx> wrote:
>
> Add a stage2 helper, kvm_pgtable_stage2_create_unlinked(), for
> creating unlinked tables (which is the opposite of
> kvm_pgtable_stage2_free_unlinked()). Creating an unlinked table is
> useful for splitting PMD and PUD blocks into subtrees of PAGE_SIZE

Please drop the PMD/PUD verbiage. That's especially confusing when
everything is described in terms of 'level'.

> PTEs. For example, a PUD can be split into PAGE_SIZE PTEs by first

for example: s/a PUD/a level 1 mapping/

> creating a fully populated tree, and then using it to replace the PUD
> in a single step. This will be used in a subsequent commit for eager
> huge-page splitting (a dirty-logging optimization).
>
> No functional change intended. This new function will be used in a
> subsequent commit.

Drop this last sentence, it doesn't say anything that you haven't
already said.

>
> Signed-off-by: Ricardo Koller <ricarkol@xxxxxxxxxx>
> Reviewed-by: Shaoqin Huang <shahuang@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 28 +++++++++++++++++
>  arch/arm64/kvm/hyp/pgtable.c         | 46 ++++++++++++++++++++++++++++
>  2 files changed, 74 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index c7a269cad053..b7b3fc0fa7a5 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -468,6 +468,34 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
>   */
>  void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level);
>
> +/**
> + * kvm_pgtable_stage2_create_unlinked() - Create an unlinked stage-2 paging structure.
> + * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
> + * @phys:	Physical address of the memory to map.
> + * @level:	Starting level of the stage-2 paging structure to be created.
> + * @prot:	Permissions and attributes for the mapping.
> + * @mc:		Cache of pre-allocated and zeroed memory from which to allocate
> + *		page-table pages.
> + * @force_pte:	Force mappings to PAGE_SIZE granularity.
> + *
> + * Returns an unlinked page-table tree. If @force_pte is true or
> + * @level is 2 (the PMD level), then the tree is mapped up to the
> + * PAGE_SIZE leaf PTE; the tree is mapped up one level otherwise.

I wouldn't make this "one level" assumption, as this really depends
on the size of what gets mapped (and future evolution of this code).

> + * This new page-table tree is not reachable (i.e., it is unlinked)
> + * from the root pgd and it's therefore unreachable by the hardware
> + * page-table walker. No TLB invalidation or CMOs are performed.
> + *
> + * If device attributes are not explicitly requested in @prot, then the
> + * mapping will be normal, cacheable.
> + *
> + * Return: The fully populated (unlinked) stage-2 paging structure, or
> + * an ERR_PTR(error) on failure.

What guarantees that this new unlinked structure is kept in sync with
the original one? AFAICT, nothing does.

> + */
> +kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
> +					      u64 phys, u32 level,
> +					      enum kvm_pgtable_prot prot,
> +					      void *mc, bool force_pte);
> +
>  /**
>   * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table.
>   * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
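For readers following along, the split described in the commit message
would use this helper roughly as follows. This is only a sketch:
kvm_pgtable_stage2_create_unlinked() and
kvm_pgtable_stage2_free_unlinked() are from the patch, while
stage2_split_block() and stage2_install_table() are hypothetical
stand-ins for the eager-splitting caller that a later commit adds.

/*
 * Sketch: split a block mapping at @level by building a fully
 * populated subtree off to the side, then installing it over the old
 * block entry in a single break-before-make step.
 */
static int stage2_split_block(struct kvm_pgtable *pgt, kvm_pte_t *ptep,
			      u64 phys, u32 level,
			      enum kvm_pgtable_prot prot, void *mc)
{
	kvm_pte_t *childp;

	/* Build the PAGE_SIZE-granularity subtree; no TLBI or CMOs yet. */
	childp = kvm_pgtable_stage2_create_unlinked(pgt, phys, level,
						    prot, mc, true);
	if (IS_ERR(childp))
		return PTR_ERR(childp);

	/*
	 * Hypothetical helper: swap the block PTE for a table entry
	 * pointing at @childp, with the required TLB invalidation.
	 * Only this step is observable by the hardware walker.
	 */
	if (stage2_install_table(pgt, ptep, childp, level)) {
		kvm_pgtable_stage2_free_unlinked(pgt->mm_ops, childp, level);
		return -EAGAIN;
	}

	return 0;
}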
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 4f703cc4cb03..6bdfcb671b32 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -1212,6 +1212,52 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
>  	return kvm_pgtable_walk(pgt, addr, size, &walker);
>  }
>
> +kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
> +					      u64 phys, u32 level,
> +					      enum kvm_pgtable_prot prot,
> +					      void *mc, bool force_pte)
> +{
> +	struct stage2_map_data map_data = {
> +		.phys		= phys,
> +		.mmu		= pgt->mmu,
> +		.memcache	= mc,
> +		.force_pte	= force_pte,
> +	};
> +	struct kvm_pgtable_walker walker = {
> +		.cb		= stage2_map_walker,
> +		.flags		= KVM_PGTABLE_WALK_LEAF |
> +				  KVM_PGTABLE_WALK_SKIP_BBM |
> +				  KVM_PGTABLE_WALK_SKIP_CMO,
> +		.arg		= &map_data,
> +	};
> +	/* .addr (the IPA) is irrelevant for an unlinked table */
> +	struct kvm_pgtable_walk_data data = {
> +		.walker	= &walker,
> +		.addr	= 0,

Is that always true? What if the caller expects a non-block-aligned
mapping? You should at least check that phys is aligned to the granule
size of 'level', or bad stuff may happen (see the sketch at the end of
this message).

> +		.end	= kvm_granule_size(level),
> +	};
> +	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
> +	kvm_pte_t *pgtable;
> +	int ret;
> +
> +	ret = stage2_set_prot_attr(pgt, prot, &map_data.attr);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	pgtable = mm_ops->zalloc_page(mc);
> +	if (!pgtable)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ret = __kvm_pgtable_walk(&data, mm_ops, (kvm_pteref_t)pgtable,
> +				 level + 1);
> +	if (ret) {
> +		kvm_pgtable_stage2_free_unlinked(mm_ops, pgtable, level);
> +		mm_ops->put_page(pgtable);
> +		return ERR_PTR(ret);
> +	}
> +
> +	return pgtable;
> +}
>
>  int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
>  			      struct kvm_pgtable_mm_ops *mm_ops,

	M.

--
Without deviation from the norm, progress is not possible.
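As an illustration of the alignment check suggested above (a sketch
only, not from the thread): IS_ALIGNED() and kvm_granule_size() both
already exist in the tree, so something like the following near the
top of kvm_pgtable_stage2_create_unlinked(), before any allocation,
would cover it.

	/*
	 * Reject a @phys that is not aligned to the block size covered
	 * by @level; the fixed [0, kvm_granule_size(level)) walk range
	 * above assumes a granule-aligned base.
	 */
	if (!IS_ALIGNED(phys, kvm_granule_size(level)))
		return ERR_PTR(-EINVAL);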