On Wed, Jul 11, 2018 at 08:39:36PM +0200, Christian Borntraeger wrote:
> 
> 
> On 07/11/2018 08:36 PM, Paul E. McKenney wrote:
> > On Wed, Jul 11, 2018 at 11:20:53AM -0700, Paul E. McKenney wrote:
> >> On Wed, Jul 11, 2018 at 07:01:01PM +0100, David Woodhouse wrote:
> >>> From: David Woodhouse <dwmw@xxxxxxxxxxxx>
> >>>
> >>> RCU can spend long periods of time waiting for a CPU which is actually in
> >>> KVM guest mode, entirely pointlessly. Treat it like the idle and userspace
> >>> modes, and don't wait for it.
> >>>
> >>> Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
> >>
> >> And idiot here forgot about some of the debugging code in RCU's dyntick-idle
> >> code.  I will reply with a fixed patch.
> >>
> >> The code below works just fine as long as you don't enable CONFIG_RCU_EQS_DEBUG,
> >> so should be OK for testing, just not for mainline.
> > 
> > And here is the updated code that allegedly avoids splatting when run with
> > CONFIG_RCU_EQS_DEBUG.
> > 
> > Thoughts?
> > 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------
> > 
> > commit 12cd59e49cf734f907f44b696e2c6e4b46a291c3
> > Author: David Woodhouse <dwmw@xxxxxxxxxxxx>
> > Date:   Wed Jul 11 19:01:01 2018 +0100
> > 
> >     kvm/x86: Inform RCU of quiescent state when entering guest mode
> > 
> >     RCU can spend long periods of time waiting for a CPU which is actually in
> >     KVM guest mode, entirely pointlessly. Treat it like the idle and userspace
> >     modes, and don't wait for it.
> > 
> >     Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
> >     Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> >     [ paulmck: Adjust to avoid bad advice I gave to dwmw, avoid WARN_ON()s. ]
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 0046aa70205a..b0c82f70afa7 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -7458,7 +7458,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >  		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
> >  	}
> > 
> > +	rcu_kvm_enter();
> >  	kvm_x86_ops->run(vcpu);
> > +	rcu_kvm_exit();
> 
> As indicated in my other mail. This is supposed to be handled in the guest_enter|exit_ calls around
> the run function. This would also handle other architectures. So if the guest_enter_irqoff code is
> not good enough, we should rather fix that instead of adding another rcu hint.

Something like this, on top of the earlier patch?  I am not at all
confident of this patch because there might be other entry/exit paths
I am missing.  Plus there might be RCU uses on the arch-specific path
to and from the guest OS.

Thoughts?

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b0c82f70afa7..0046aa70205a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7458,9 +7458,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
 	}
 
-	rcu_kvm_enter();
 	kvm_x86_ops->run(vcpu);
-	rcu_kvm_exit();
 
 	/*
 	 * Do this here before restoring debug registers on the host.  And
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d05609ad329d..8d2a9d3073ad 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -118,12 +118,12 @@ static inline void guest_enter_irqoff(void)
 	 * one time slice). Lets treat guest mode as quiescent state, just like
 	 * we do with user-mode execution.
 	 */
-	if (!context_tracking_cpu_is_enabled())
-		rcu_virt_note_context_switch(smp_processor_id());
+	rcu_kvm_enter();
 }
 
 static inline void guest_exit_irqoff(void)
 {
+	rcu_kvm_exit();
 	if (context_tracking_is_enabled())
 		__context_tracking_exit(CONTEXT_GUEST);
 
@@ -143,12 +143,13 @@ static inline void guest_enter_irqoff(void)
 	 */
 	vtime_account_system(current);
 	current->flags |= PF_VCPU;
-	rcu_virt_note_context_switch(smp_processor_id());
+	rcu_kvm_enter();
 }
 
 static inline void guest_exit_irqoff(void)
 {
 	/* Flush the guest cputime we spent on the guest */
+	rcu_kvm_exit();
 	vtime_account_system(current);
 	current->flags &= ~PF_VCPU;
 }
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 4b2d691e453f..a7aa5b3cfb81 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -81,7 +81,6 @@ static inline int rcu_needs_cpu(u64 basemono, u64 *nextevt)
  * Take advantage of the fact that there is only one CPU, which
  * allows us to ignore virtualization-based context switches.
  */
-static inline void rcu_virt_note_context_switch(int cpu) { }
 static inline void rcu_cpu_stall_reset(void) { }
 static inline void rcu_idle_enter(void) { }
 static inline void rcu_idle_exit(void) { }
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 48ce58b53ece..62b61e579bb4 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -34,17 +34,6 @@ void rcu_softirq_qs(void);
 void rcu_note_context_switch(bool preempt);
 int rcu_needs_cpu(u64 basem, u64 *nextevt);
 void rcu_cpu_stall_reset(void);
-
-/*
- * Note a virtualization-based context switch.  This is simply a
- * wrapper around rcu_note_context_switch(), which allows TINY_RCU
- * to save a few bytes.  The caller must have disabled interrupts.
- */
-static inline void rcu_virt_note_context_switch(int cpu)
-{
-	rcu_note_context_switch(false);
-}
-
void synchronize_rcu_expedited(void);
void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func);