On Mon, Jan 10, 2022, Raghavendra Rao Ananta wrote:
> On Fri, Jan 7, 2022 at 5:06 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > On Tue, Jan 04, 2022, Raghavendra Rao Ananta wrote:
> > > +#define kvm_vm_has_started(kvm) (kvm->vm_started)
> >
> > Needs parentheses around (kvm), but why bother with a macro?  This is the
> > same header that defines struct kvm.
> >
> No specific reason for creating a macro as such. I can remove it if it
> feels noisy.

Please do.  In the future, don't use a macro unless there's a good reason to
do so.  Don't get me wrong, I love abusing macros, but for things like this
they are completely inferior to

	static inline bool kvm_vm_has_started(struct kvm *kvm)
	{
		return kvm->vm_started;
	}

because a helper function gives us type safety, doesn't suffer from
concatenation of tokens potentially doing weird things, is easier to extend
to a multi-line implementation, etc...

An example of when it's ok to use a macro is x86's

	#define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)

which uses a macro instead of a proper function to avoid a circular dependency:
arch/x86/include/asm/kvm_host.h is included by include/linux/kvm_host.h, and
thus x86's implementation of kvm_arch_vcpu_memslots_id() comes before the
definition of struct kvm_vcpu.  But that's very much an exception, done only
because the alternatives suck more.

> > > + */
> > > +	mutex_lock(&kvm->lock);
> >
> > This adds unnecessary lock contention when running vCPUs.  The naive
> > solution would be:
> >
> >	if (!kvm->vm_started) {
> >		...
> >	}
> >
> Not sure if I understood the solution..

In your proposed patch, KVM_RUN will take kvm->lock _every_ time.  That
introduces unnecessary contention as it will serialize this bit of code if
multiple vCPUs are attempting KVM_RUN.  By checking !vm_started, only the
"first" KVM_RUN for a VM will acquire kvm->lock and thus avoid contention
once the VM is up and running.
There's still a possibility that multiple vCPUs will contend for kvm->lock on
their first KVM_RUN, hence the quotes around "first".  I called it "naive"
because it's possible there's a more elegant solution depending on the use
case, e.g. a lockless approach might work (or it might not).

> > > +	kvm->vm_started = true;
> > > +	mutex_unlock(&kvm->lock);
> >
> > Lastly, why is this in generic KVM?
> >
> The v1 of the series originally had it in the arm specific code.
> However, it was suggested that I move it to the generic code since the
> bookkeeping is not arch specific and could be helpful to others too [1].

I'm definitely in favor of moving/adding things to generic KVM when it makes
sense, but I'm skeptical in this particular case.  The code _is_ arch specific
in that arm64 apparently needs to acquire kvm->lock when checking if a vCPU
has run, e.g. versus a hypothetical x86 use case that might be completely ok
with a lockless implementation.  And it's not obvious that there's a
plausible, safe use case outside of arm64.  E.g. on x86, there is very, very
little that is truly shared across the entire VM/system; most things are
per-thread/core/package in some way, shape, or form.  In other words, I'm
wary of providing something like this for x86 because odds are good that any
use will be functionally incorrect.

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm