Re: [RFC PATCH v2 3/8] KVM: arm64: Add some HW_DBM related pgtable interfaces

On Fri, Sep 22, 2023 at 04:24:11PM +0100, Catalin Marinas wrote:
> On Fri, Aug 25, 2023 at 10:35:23AM +0100, Shameer Kolothum wrote:
> > +static bool stage2_pte_writeable(kvm_pte_t pte)
> > +{
> > +	return pte & KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
> > +}
> > +
> > +static void kvm_update_hw_dbm(const struct kvm_pgtable_visit_ctx *ctx,
> > +			      kvm_pte_t new)
> > +{
> > +	kvm_pte_t old_pte, pte = ctx->old;
> > +
> > +	/* Only set DBM if page is writeable */
> > +	if ((new & KVM_PTE_LEAF_ATTR_HI_S2_DBM) && !stage2_pte_writeable(pte))
> > +		return;
> > +
> > +	/* Clear DBM walk is not shared, update */
> > +	if (!kvm_pgtable_walk_shared(ctx)) {
> > +		WRITE_ONCE(*ctx->ptep, new);
> > +		return;
> > +	}
> 
> I was wondering if this interferes with the OS dirty tracking (not the
> KVM one) but I think that's ok, at least at this point, since the PTE is
> already writeable and a fault would have marked the underlying page as
> dirty (user_mem_abort() -> kvm_set_pfn_dirty()).
> 
> I'm not particularly fond of relying on this but I need to see how it
> fits with the rest of the series. IIRC KVM doesn't go around and make
> Stage 2 PTEs read-only but rather unmaps them when it changes the
> permission of the corresponding Stage 1 VMM mapping.
> 
> My personal preference would be to track dirty/clean properly as we do
> for stage 1 (e.g. DBM means writeable PTE) but it has some downsides
> like the try_to_unmap() code having to retrieve the dirty state via
> notifiers.

KVM's use of DBM is complicated by the fact that the dirty log
interface with userspace works at PTE granularity. We only want the
page table walker to relax last-level PTEs; hugepages should still
take write faults so we can split them first.
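
To be concrete, what I have in mind is roughly the below (a sketch
only, stage2_can_set_dbm() is a made-up helper name and not part of
this series):

/*
 * Sketch: only last-level PTEs are eligible for DBM. Block mappings
 * keep taking write permission faults so they can be split before we
 * start relaxing the resulting PTEs.
 */
static bool stage2_can_set_dbm(const struct kvm_pgtable_visit_ctx *ctx)
{
	/* Hugepages must keep faulting so they can be split first */
	if (ctx->level != KVM_PGTABLE_MAX_LEVELS - 1)
		return false;

	/* Only relax entries that are already valid and writeable */
	return kvm_pte_valid(ctx->old) && stage2_pte_writeable(ctx->old);
}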

> Anyway, assuming this works correctly, it means that live migration via
> DBM is only tracked for PTEs already made dirty/writeable by some guest
> write.

I'm hoping that we move away from this combined write-protection and DBM
scheme and only use a single dirty tracking strategy at a time.

> > @@ -952,6 +990,11 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
> >  	    stage2_pte_executable(new))
> >  		mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
> >  
> > +	/* Save the possible hardware dirty info */
> > +	if ((ctx->level == KVM_PGTABLE_MAX_LEVELS - 1) &&
> > +	    stage2_pte_writeable(ctx->old))
> > +		mark_page_dirty(kvm_s2_mmu_to_kvm(pgt->mmu), ctx->addr >> PAGE_SHIFT);
> > +
> >  	stage2_make_pte(ctx, new);
> 
> Isn't this racy and potentially losing the dirty state? Or is the 'new'
> value guaranteed to have the S2AP[1] bit? For stage 1 we normally make
> the page genuinely read-only (clearing DBM) in a cmpxchg loop to
> preserve the dirty state (see ptep_set_wrprotect()).

stage2_try_break_pte() a few lines up does a cmpxchg() and full
break-before-make, so at this point there shouldn't be a race with
either software or hardware table walkers.
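
For illustration, the ordering is roughly the below (a simplified
sketch, not the actual pgtable.c code; the TLBI step is elided and
the helper name is made up):

static bool stage2_bbm_sketch(const struct kvm_pgtable_visit_ctx *ctx,
			      kvm_pte_t new)
{
	/* Break: atomically knock out the old entry */
	if (cmpxchg(ctx->ptep, ctx->old, 0) != ctx->old)
		return false;	/* raced with HW DBM or another walker */

	/* Invalidate any stale TLB entries for this IPA here */

	/* Make: install the new entry */
	WRITE_ONCE(*ctx->ptep, new);
	return true;
}

If hardware sets S2AP[1] between the read of ctx->old and the
cmpxchg(), the cmpxchg() fails and the update is retried, so the
dirty state can't be silently lost.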

But I'm still confused by this one. KVM only goes down the map
walker path (in the context of dirty tracking) if:

 - We took a translation fault

 - We took a write permission fault on a hugepage and need to split

In both cases the 'old' translation should have DBM cleared. And even
if the old PTE were dirty, saving it here is wasted work, since we
have to do a final scan of the stage-2 tables anyway when userspace
collects the dirty log.
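
i.e. the dirty state gets picked up in that final pass, something
along these lines (hypothetical sketch, the visitor name is made up):

/* Visitor for a final stage-2 scan when the dirty log is collected */
static int stage2_sync_dirty_walker(const struct kvm_pgtable_visit_ctx *ctx,
				    enum kvm_pgtable_walk_flags visit)
{
	struct kvm *kvm = ctx->arg;

	if (!kvm_pte_valid(ctx->old) || !stage2_pte_writeable(ctx->old))
		return 0;

	mark_page_dirty(kvm, ctx->addr >> PAGE_SHIFT);
	return 0;
}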

Am I missing something?

-- 
Thanks,
Oliver


