On 2020-09-10 11:51, Will Deacon wrote:
> On Wed, Sep 09, 2020 at 06:12:29PM +0100, Marc Zyngier wrote:
> > On 2020-09-09 15:20, Alexandru Elisei wrote:
> > > On 9/7/20 4:23 PM, Will Deacon wrote:
> > > > @@ -1610,62 +1605,31 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > >  	if (vma_pagesize == PAGE_SIZE && !force_pte)
> > > >  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
> > > >  							   &pfn, &fault_ipa);
> > > > -	if (writable)
> > > > +	if (writable) {
> > > > +		prot |= KVM_PGTABLE_PROT_W;
> > > >  		kvm_set_pfn_dirty(pfn);
> > > > +		mark_page_dirty(kvm, gfn);
> > >
> > > The previous code called mark_page_dirty() only if the vma_pagesize == PAGE_SIZE
> > > (and writable was true, obviously). Is this supposed to fix a bug?
> >
> > No, this is actually introducing one. mark_page_dirty() checks that there is an
> > associated bitmap, and thus only happens when writing to a single page, but we
> > shouldn't do it for R/O memslots, which the current code avoids. It should be
> > guarded by logging_active.
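
For reference, the bitmap check Marc mentions is in the generic
mark_page_dirty() path in virt/kvm/kvm_main.c. A condensed, paraphrased
sketch of what it did around this time (not the exact source, and collapsed
into one function here for brevity):

	void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
	{
		struct kvm_memory_slot *memslot = gfn_to_memslot(kvm, gfn);

		/* A memslot only has a dirty_bitmap while dirty logging
		 * is enabled for it; with no bitmap this is a no-op. */
		if (memslot && memslot->dirty_bitmap) {
			unsigned long rel_gfn = gfn - memslot->base_gfn;

			set_bit_le(rel_gfn, memslot->dirty_bitmap);
		}
	}
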
>
> gfn_to_pfn_prot() will set "writable" to false for R/O memslots, so I think
> we're good here.
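
The guarantee Will points to comes from the read-only-memslot check in
__gfn_to_pfn_memslot(), which gfn_to_pfn_prot() calls into. Roughly, from
virt/kvm/kvm_main.c of the same era (paraphrased fragment):

	/* Inside __gfn_to_pfn_memslot(), after the hva lookup: */
	if (writable && memslot_is_readonly(slot)) {
		/* Never hand back a writable pfn for an R/O memslot. */
		*writable = false;
		writable = NULL;
	}

With "writable" forced to false, user_mem_abort() never takes the branch in
the patch above that calls mark_page_dirty() for an R/O memslot.
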
Ah, good point.

M.
--
Jazz is not dead. It just smells funny...