Re: Question about lock_all_vcpus

On 2/6/25 21:08, Maxim Levitsky wrote:
> Do you think that's possible, or do you know if there have been any
> efforts to get rid of lock_all_vcpus to avoid this problem?  If it's
> not possible, maybe we can exclude lock_all_vcpus from the lockdep
> validator?
>
> AFAIK, on x86 most of the similar cases where lock_all_vcpus could
> be used are handled by assuming and enforcing that userspace calls
> these functions before the first vCPU is created and/or run, so the
> need for such locking doesn't exist.

The way x86 handles something like lock_all_vcpus() is
sev_lock_vcpus_for_migration(), where the vCPU mutexes of a VM are
all collapsed into a single lock key.

This works because you know that multiple vCPU mutexes are only nested
while kvm->lock is held as well.  Since that's also the case for ARM's
lock_all_vcpus(), perhaps sev_lock_vcpus_for_migration() and
sev_unlock_vcpus_for_migration() could be moved to virt/kvm/kvm_main.c
and renamed to kvm_{lock,unlock}_all_vcpus_nested(), with another
function that lacks the _nested suffix and hardcodes that argument to 0.
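
Very roughly (and completely untested), the generic version could look
something like the following; the names are just the proposal above,
not an existing API, and the lockdep annotations mirror what
sev_lock_vcpus_for_migration() does today:

int kvm_lock_all_vcpus_nested(struct kvm *kvm, unsigned int subclass)
{
	struct kvm_vcpu *vcpu;
	unsigned long i, j;

	lockdep_assert_held(&kvm->lock);

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (mutex_lock_killable_nested(&vcpu->mutex, subclass))
			goto out_unlock;

#ifdef CONFIG_PROVE_LOCKING
		if (!i)
			/*
			 * Bump the subclass so that the remaining vCPUs
			 * cannot collide with the first vCPU's lock key.
			 */
			subclass++;
		else
			/*
			 * Drop the lockdep annotation (the mutex itself
			 * stays held!) so that all vCPUs of the VM share
			 * a single key and lockdep's lock depth stays
			 * bounded no matter how many vCPUs there are.
			 */
			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
#endif
	}

	return 0;

out_unlock:
	kvm_for_each_vcpu(j, vcpu, kvm) {
		if (j == i)
			break;

#ifdef CONFIG_PROVE_LOCKING
		if (j)
			/* Re-attach the annotation before unlocking. */
			mutex_acquire(&vcpu->mutex.dep_map, subclass, 0,
				      _THIS_IP_);
#endif
		mutex_unlock(&vcpu->mutex);
	}
	return -EINTR;
}

void kvm_unlock_all_vcpus_nested(struct kvm *kvm, unsigned int subclass)
{
	struct kvm_vcpu *vcpu;
	unsigned long i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
#ifdef CONFIG_PROVE_LOCKING
		if (i)
			/*
			 * Re-attach the annotation, at a subclass that
			 * cannot collide with the first vCPU's, so that
			 * mutex_unlock() has something to release.
			 */
			mutex_acquire(&vcpu->mutex.dep_map, subclass + 1,
				      0, _THIS_IP_);
#endif
		mutex_unlock(&vcpu->mutex);
	}
}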

RISC-V also has a copy of lock_all_vcpus(), and it likewise holds
kvm->lock around it thanks to kvm_ioctl_create_device(); so it could
use the same generic function, too.
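
A hypothetical caller (e.g. a device ioctl path) would then follow the
usual pattern, taking the vCPU mutexes only while kvm->lock is already
held, which is exactly what makes collapsing them into one lock key
safe:

static int example_device_attr_set(struct kvm *kvm)
{
	int r;

	mutex_lock(&kvm->lock);

	/* The non-_nested wrapper would hardcode the 0 here. */
	r = kvm_lock_all_vcpus_nested(kvm, 0);
	if (r)
		goto out;

	/* ... update state that must not race with running vCPUs ... */

	kvm_unlock_all_vcpus_nested(kvm, 0);
out:
	mutex_unlock(&kvm->lock);
	return r;
}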

Paolo

> Recently x86 got a lot of cleanups to enforce this, for example
> enforcing that userspace won't change CPUID after a vCPU has run.
>
> Best regards,
> Maxim Levitsky






