On Thu, Oct 27, 2022, David Matlack wrote:
> Add a new field to struct kvm that keeps track of the number of memslots
> with dirty logging enabled. This will be used in a future commit to
> cheaply check if any memslot is doing dirty logging.
>
> Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
> ---
>  include/linux/kvm_host.h |  2 ++
>  virt/kvm/kvm_main.c      | 10 ++++++++++

Why put this in common code? I'm having a hard time coming up with a second
use case, since the count isn't stable, i.e. it can't be used for anything
except scenarios like x86's NX huge page mitigation where a false
negative/positive is benign.

>  2 files changed, 12 insertions(+)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 32f259fa5801..25ed8c1725ff 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -709,6 +709,8 @@ struct kvm {
>  	struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
>  	/* The current active memslot set for each address space */
>  	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
> +	/* The number of memslots with dirty logging enabled. */
> +	int nr_memslots_dirty_logging;

I believe this can technically be a u16, as even with SMM KVM ensures the
total number of memslots fits in a u16. A BUILD_BUG_ON() sanity check is
probably a good idea regardless (rough sketch at the bottom of this mail).

>  	struct xarray vcpu_array;
>
>  	/* Used to wait for completion of MMU notifiers. */
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index e30f1b4ecfa5..57e4406005cd 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1641,6 +1641,9 @@ static void kvm_commit_memory_region(struct kvm *kvm,
>  				     const struct kvm_memory_slot *new,
>  				     enum kvm_mr_change change)
>  {
> +	int old_flags = old ? old->flags : 0;
> +	int new_flags = new ? new->flags : 0;

Not that it really matters, but kvm_memory_slot.flags is a u32.

>  	/*
>  	 * Update the total number of memslot pages before calling the arch
>  	 * hook so that architectures can consume the result directly.
> @@ -1650,6 +1653,13 @@ static void kvm_commit_memory_region(struct kvm *kvm,
>  	else if (change == KVM_MR_CREATE)
>  		kvm->nr_memslot_pages += new->npages;
>
> +	if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
> +		if (new_flags & KVM_MEM_LOG_DIRTY_PAGES)
> +			kvm->nr_memslots_dirty_logging++;
> +		else
> +			kvm->nr_memslots_dirty_logging--;

A sanity check that KVM hasn't botched the count is probably a good idea,
e.g. similar to the WARN_ON_ONCE() sanity check in __kvm_set_memory_region()
that guards against underflowing nr_memslot_pages (second sketch at the
bottom of this mail).

> +	}
> +
>  	kvm_arch_commit_memory_region(kvm, old, new, change);
>
>  	switch (change) {
> --
> 2.38.1.273.g43a17bfeac-goog
>
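
For reference, a completely untested sketch of the u16 + BUILD_BUG_ON() idea
from above. Using KVM_MEM_SLOTS_NUM as the per-address-space bound and
stuffing the assertion into kvm_create_vm() are just my assumptions, not a
requirement:

	/* In struct kvm, instead of an int: */

	/* The number of memslots with dirty logging enabled. */
	u16 nr_memslots_dirty_logging;

	/* E.g. somewhere in kvm_create_vm(): */

	/*
	 * Even with every address space fully populated, the total number
	 * of memslots must fit in a u16.
	 */
	BUILD_BUG_ON(KVM_ADDRESS_SPACE_NUM * KVM_MEM_SLOTS_NUM > U16_MAX);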
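
And a similarly untested sketch of the underflow check for the decrement
path in kvm_commit_memory_region(); WARN-and-skip is just one way to do it:

	if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
		if (new_flags & KVM_MEM_LOG_DIRTY_PAGES) {
			kvm->nr_memslots_dirty_logging++;
		} else {
			/* WARN, and skip the decrement, if the count would underflow. */
			if (!WARN_ON_ONCE(!kvm->nr_memslots_dirty_logging))
				kvm->nr_memslots_dirty_logging--;
		}
	}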