Re: [PATCH v6 11/12] KVM: arm64: Split huge pages during KVM_CLEAR_DIRTY_LOG

On Sun, Mar 12, 2023 at 01:01:26PM +0000, Marc Zyngier wrote:
> On Tue, 07 Mar 2023 03:45:54 +0000,
> Ricardo Koller <ricarkol@xxxxxxxxxx> wrote:
> > 
> > This is the arm64 counterpart of commit cb00a70bd4b7 ("KVM: x86/mmu:
> > Split huge pages mapped by the TDP MMU during KVM_CLEAR_DIRTY_LOG"),
> > which has the benefit of splitting the cost of splitting a memslot
> > across multiple ioctls.
> > 
> > Split huge pages on the range specified using KVM_CLEAR_DIRTY_LOG.
> > And do not split when enabling dirty logging if
> > KVM_DIRTY_LOG_INITIALLY_SET is set.
> > 
> > Signed-off-by: Ricardo Koller <ricarkol@xxxxxxxxxx>
> > ---
> >  arch/arm64/kvm/mmu.c | 15 ++++++++++++---
> >  1 file changed, 12 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 910aea6bbd1e..d54223b5db97 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1089,8 +1089,8 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
> >   * @mask:	The mask of pages at offset 'gfn_offset' in this memory
> >   *		slot to enable dirty logging on
> >   *
> > - * Writes protect selected pages to enable dirty logging for them. Caller must
> > - * acquire kvm->mmu_lock.
> > + * Splits selected pages to PAGE_SIZE and then writes protect them to enable
> > + * dirty logging for them. Caller must acquire kvm->mmu_lock.
> 
> The code does things in the opposite order...

Fixed the comment to match the code: write-protect first, then split.

> 
> >   */
> >  void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
> >  		struct kvm_memory_slot *slot,
> > @@ -1103,6 +1103,13 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
> >  	lockdep_assert_held_write(&kvm->mmu_lock);
> >  
> >  	stage2_wp_range(&kvm->arch.mmu, start, end);
> > +
> > +	/*
> > +	 * If initially-all-set mode is not set, then huge-pages were already
> > +	 * split when enabling dirty logging: no need to do it again.
> > +	 */
> > +	if (kvm_dirty_log_manual_protect_and_init_set(kvm))
> 
> This contradicts the comment. Which one is correct?

Changed the comment to make it clear that splitting happens here only in the
initially-all-set case; otherwise the memslot was already split when dirty
logging was enabled, so there is nothing left to do.

> 
> > +		kvm_mmu_split_huge_pages(kvm, start, end);
> >  }
> >  
> >  static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
> > @@ -1889,7 +1896,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> >  		 * this when deleting, moving, disabling dirty logging, or
> >  		 * creating the memslot (a nop). Doing it for deletes makes
> >  		 * sure we don't leak memory, and there's no need to keep the
> > -		 * cache around for any of the other cases.
> > +		 * cache around for any of the other cases. Keeping the cache
> > +		 * is useful for successive KVM_CLEAR_DIRTY_LOG calls, which is
> > +		 * not handled in this function.
> 
> Where is it handled then?

This last sentence doesn't make much sense, so I removed it. CLEAR calls
don't even go through this function.

> 
> >  		 */
> >  		kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
> >  	}
> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.
