Re: [PATCH 2/2] KVM: Protect vCPU's "last run PID" with rwlock, not RCU

On Tue, Aug 06, 2024, Oliver Upton wrote:
> On Fri, Aug 02, 2024 at 01:01:36PM -0700, Sean Christopherson wrote:
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index a33f5996ca9f..7199cb014806 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1115,7 +1115,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
> >  void kvm_arm_halt_guest(struct kvm *kvm);
> >  void kvm_arm_resume_guest(struct kvm *kvm);
> >  
> > -#define vcpu_has_run_once(vcpu)	!!rcu_access_pointer((vcpu)->pid)
> > +#define vcpu_has_run_once(vcpu)	(!!READ_ONCE((vcpu)->pid))
> >  
> >  #ifndef __KVM_NVHE_HYPERVISOR__
> >  #define kvm_call_hyp_nvhe(f, ...)						\
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 689e8be873a7..d6f4e8b2b44c 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -342,7 +342,8 @@ struct kvm_vcpu {
> >  #ifndef __KVM_HAVE_ARCH_WQP
> >  	struct rcuwait wait;
> >  #endif
> > -	struct pid __rcu *pid;
> > +	struct pid *pid;
> > +	rwlock_t pid_lock;
> >  	int sigset_active;
> >  	sigset_t sigset;
> >  	unsigned int halt_poll_ns;
> 
> Adding yet another lock is never exciting, but this looks fine.

Heh, my feelings too.  Maybe that's why I didn't post this for two years.
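
To give a feel for the reader side, e.g. a path like kvm_vcpu_yield_to() that
needs a stable pid reference, the conversion is roughly this (sketch, not the
exact diff):

	struct pid *pid;
	struct task_struct *task = NULL;

	/* Replaces rcu_read_lock() + rcu_dereference(vcpu->pid). */
	read_lock(&vcpu->pid_lock);
	pid = vcpu->pid;
	if (pid)
		task = get_pid_task(pid, PIDTYPE_PID);
	read_unlock(&vcpu->pid_lock);

get_pid_task() grabs a reference on the task, so dropping the lock immediately
after is fine.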

> Can you nest this lock inside of the vcpu->mutex acquisition in
> kvm_vm_ioctl_create_vcpu() so lockdep gets the picture?

I don't think that's necessary.  Commit 42a90008f890 ("KVM: Ensure lockdep knows
about kvm->lock vs. vcpu->mutex ordering rule") added the lock+unlock in
kvm_vm_ioctl_create_vcpu() purely because actually taking vcpu->mutex inside
kvm->lock is rare, i.e. lockdep would be unable to detect issues except for very
specific VM types hitting very specific flows.

But for this lock, every arch is guaranteed to take the lock on the first KVM_RUN,
as "oldpid" is '0' and guaranteed to mismatch task_pid(current).  So I don't think
we need to go out of our way to alert lockdep.
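
E.g. the KVM_RUN path ends up looking roughly like this (sketch, not the exact
diff); because "oldpid" starts out NULL, the very first KVM_RUN always takes
the write lock while vcpu->mutex is held, so lockdep learns the
vcpu->mutex => pid_lock ordering for free:

	oldpid = vcpu->pid;
	if (unlikely(oldpid != task_pid(current))) {
		/* The thread running this vCPU changed. */
		struct pid *newpid = get_task_pid(current, PIDTYPE_PID);

		write_lock(&vcpu->pid_lock);
		vcpu->pid = newpid;
		write_unlock(&vcpu->pid_lock);

		put_pid(oldpid);
	}
	r = kvm_arch_vcpu_ioctl_run(vcpu);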

> > @@ -4466,7 +4469,7 @@ static long kvm_vcpu_ioctl(struct file *filp,
> >  		r = -EINVAL;
> >  		if (arg)
> >  			goto out;
> > -		oldpid = rcu_access_pointer(vcpu->pid);
> > +		oldpid = vcpu->pid;
> 
> It'd be good to add a comment here about how this is guarded by the
> vcpu->mutex, as Steve points out.

Roger that.
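
Something along these lines, I'm thinking (exact wording TBD):

	/*
	 * Note, vcpu->pid is only written while vcpu->mutex is held, and
	 * KVM_RUN holds vcpu->mutex, so reading the pid without taking
	 * pid_lock is safe here.
	 */
	oldpid = vcpu->pid;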



