Re: [PATCH v3 04/12] KVM: arm64: Add kvm_pgtable_stage2_split()

On Thu, Feb 16, 2023 at 10:07 AM Ricardo Koller <ricarkol@xxxxxxxxxx> wrote:
>
> On Wed, Feb 15, 2023 at 4:36 PM Oliver Upton <oliver.upton@xxxxxxxxx> wrote:
> >
> > On Wed, Feb 15, 2023 at 05:40:38PM +0000, Ricardo Koller wrote:
> >
> > [...]
> >
> > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > index fed314f2b320..e2fb78398b3d 100644
> > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > @@ -1229,6 +1229,111 @@ int kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
> > >       return 0;
> > >  }
> > >
> > > +struct stage2_split_data {
> > > +     struct kvm_s2_mmu               *mmu;
> > > +     void                            *memcache;
> > > +     u64                             mc_capacity;
> > > +};
> > > +
> > > +/*
> > > + * Get the number of page-tables needed to replace a block with a fully
> > > + * populated tree, down to the PTE level, at a particular level.
> > > + */
> > > +static inline u32 stage2_block_get_nr_page_tables(u32 level)
> > > +{
> > > +     switch (level) {
> > > +     /* There are no blocks at level 0 */
> > > +     case 1: return 1 + PTRS_PER_PTE;
> > > +     case 2: return 1;
> > > +     case 3: return 0;
> > > +     default:
> > > +             WARN_ON_ONCE(1);
> > > +             return ~0;
> > > +     }
> > > +}
> >
> > This doesn't take into account our varying degrees of hugepage support
> > across page sizes. Perhaps:
> >
> >   static inline int stage2_block_get_nr_page_tables(u32 level)
> >   {
> >           if (WARN_ON_ONCE(level < KVM_PGTABLE_MIN_BLOCK_LEVEL ||
> >                            level >= KVM_PGTABLE_MAX_LEVELS))
> >                   return -EINVAL;
> >
> >           switch (level) {
> >           case 1:
> >                 return PTRS_PER_PTE + 1;
> >           case 2:
> >                 return 1;
> >           case 3:
> >                 return 0;
> >           }
> >   }
> >
> > paired with an explicit error check and early return on the caller side.
>
> Sounds good, will add this to the next version.
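>
> Something like this on the caller side then (untested sketch; the exact
> variable names may change in the next version):
>
>         ret = stage2_block_get_nr_page_tables(level);
>         if (ret < 0)
>                 return ret;
>         nr_pages = ret;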
>
> >
> > > +static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
> > > +                            enum kvm_pgtable_walk_flags visit)
> > > +{
> > > +     struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> > > +     struct stage2_split_data *data = ctx->arg;
> > > +     kvm_pte_t pte = ctx->old, new, *childp;
> > > +     enum kvm_pgtable_prot prot;
> > > +     void *mc = data->memcache;
> > > +     u32 level = ctx->level;
> > > +     u64 phys, nr_pages;
> > > +     bool force_pte;
> > > +     int ret;
> > > +
> > > +     /* No huge-pages exist at the last level */
> > > +     if (level == KVM_PGTABLE_MAX_LEVELS - 1)
> > > +             return 0;
> > > +
> > > +     /* We only split valid block mappings */
> > > +     if (!kvm_pte_valid(pte))
> > > +             return 0;
> > > +
> > > +     nr_pages = stage2_block_get_nr_page_tables(level);
> > > +     if (data->mc_capacity >= nr_pages) {
> > > +             /* Build a tree mapped down to the PTE granularity. */
> > > +             force_pte = true;
> > > +     } else {
> > > +             /*
> > > +              * Don't force PTEs. This requires a single page of PMDs at the
> > > +              * PUD level, or a single page of PTEs at the PMD level. If we
> > > +              * are at the PUD level, the PTEs will be created recursively.
> > > +              */
> > > +             force_pte = false;
> > > +             nr_pages = 1;
> > > +     }
> >
> > Do we know if the 'else' branch here is even desirable? I.e. has
> > recursive shattering been tested with PUD hugepages (HugeTLB 1G) and
> > shown to improve guest performance while dirty tracking?
>
> Yes, I think it's desirable. Here are some numbers on a Neoverse N1 using
> dirty_log_perf_test (152 vCPUs, 1G of memory each, 4K pages):
>
> CHUNK_SIZE=1G
> Enabling dirty logging time: 2.468014046s
> Iteration 1 dirty memory time: 4.275447900s
>
> CHUNK_SIZE=2M
> Enabling dirty logging time: 2.692124099s
> Iteration 1 dirty memory time: 4.284682220s
>
> The time to enable dirty logging increases as expected when using a smaller
> CHUNK_SIZE, but not by much (~9%). It's a fair tradeoff for users not willing
> to allocate large caches.
>
> >
> > The observations we've made on existing systems were that the successive
> > break-before-make operations led to a measurable slowdown in guest
> > pre-copy performance. Recursively building the page tables should
> > actually result in *more* break-before-makes than if we just let the vCPU
> > fault path lazily shatter hugepages.
> >
> > --
> > Thanks,
> > Oliver

There is a terribly offensive image that was attached to the previous email.
I would like to apologize for attaching it. I don't know how that happened.

Ricardo


