On Wed, Feb 10, 2021, Makarand Sonare wrote:
> @@ -7517,9 +7531,39 @@ static void vmx_slot_enable_log_dirty(struct kvm *kvm,
>  static void vmx_slot_disable_log_dirty(struct kvm *kvm,
>  				       struct kvm_memory_slot *slot)
>  {
> +	/*
> +	 * Check all slots and disable PML if dirty logging
> +	 * is being disabled for the last slot
> +	 *
> +	 */
> +	if (enable_pml &&
> +	    kvm->dirty_logging_enable_count == 0 &&
> +	    kvm->arch.pml_enabled) {
> +		kvm->arch.pml_enabled = false;
> +		kvm_make_all_cpus_request(kvm,
> +					  KVM_REQ_UPDATE_VCPU_DIRTY_LOGGING_STATE);
> +	}
> +
>  	kvm_mmu_slot_set_dirty(kvm, slot);
>  }

...

> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index ee4ac2618ec59..c6e5b026bbfe8 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -307,6 +307,7 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
>  {
>  	return kvm_make_all_cpus_request_except(kvm, req, NULL);
>  }
> +EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);

If we move enable_pml into x86.c, then this export and several of the
kvm_x86_ops go away (rough sketch at the bottom of this mail).  I know this
because I have a series I was about to send that does exactly that, among
several other things.  I suspect that kvm->arch.pml_enabled could also go
away, but that's just a guess.

Anyway, I'll work with you off-list to figure out a plan.  The easiest thing
is probably for me to tack this on to the end of my series.

I completely spaced on the fact that my series would conflict with this code,
sorry :-/
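For reference, here's a rough, completely untested sketch of what the common
x86 helper could look like if enable_pml is hoisted into x86.c.  The helper
name and the exact call site are placeholders, not necessarily what my series
does.  The export disappears because x86.c is built into kvm.ko alongside
kvm_main.c, whereas vmx.c lands in kvm-intel.ko and therefore needs the
symbol exported to call it.

	/*
	 * In arch/x86/kvm/x86.c; names are illustrative only.  Assumes
	 * kvm->dirty_logging_enable_count has already been updated for the
	 * slot being changed before this is called.
	 */
	static void kvm_x86_update_pml_state(struct kvm *kvm)
	{
		bool want_pml = kvm->dirty_logging_enable_count > 0;

		if (!enable_pml || want_pml == kvm->arch.pml_enabled)
			return;

		kvm->arch.pml_enabled = want_pml;

		/* No EXPORT_SYMBOL_GPL() needed, this call stays in kvm.ko. */
		kvm_make_all_cpus_request(kvm,
					  KVM_REQ_UPDATE_VCPU_DIRTY_LOGGING_STATE);
	}

Note that in that form kvm->arch.pml_enabled is just a cached copy of
"dirty_logging_enable_count > 0", which is why I suspect it can go away too.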