Re: [PATCH v2 6/8] kvm tools: Add rwlock wrapper

On Fri, Jun 03, 2011 at 10:56:20PM +0300, Sasha Levin wrote:
> On Fri, 2011-06-03 at 12:31 -0700, Paul E. McKenney wrote:
> > On Fri, Jun 03, 2011 at 10:54:19AM +0300, Sasha Levin wrote:
> > > On Fri, 2011-06-03 at 09:34 +0200, Ingo Molnar wrote:
> > > > * Sasha Levin <levinsasha928@xxxxxxxxx> wrote:
> > > > 
> > > > > > with no apparent progress being made.
> > > > > 
> > > > > Since it's something that worked in 2.6.37, I've looked into it to 
> > > > > find what might have caused this issue.
> > > > > 
> > > > > I've bisected guest kernels and found that the problem starts with:
> > > > > 
> > > > > a26ac2455ffcf3be5c6ef92bc6df7182700f2114 is the first bad commit
> > > > > commit a26ac2455ffcf3be5c6ef92bc6df7182700f2114
> > > > > Author: Paul E. McKenney <paul.mckenney@xxxxxxxxxx>
> > > > > Date:   Wed Jan 12 14:10:23 2011 -0800
> > > > > 
> > > > >     rcu: move TREE_RCU from softirq to kthread
> > > > > 
> > > > > Ingo, could you confirm that the problem goes away for you when you 
> > > > > use an earlier commit?
> > > > 
> > > > testing will have to wait, but there's a recent upstream fix:
> > > > 
> > > >   d72bce0e67e8: rcu: Cure load woes
> > > > 
> > > > That *might* address this problem too.
> > > > 
> > > I've re-tested with Linus's current git; the problem is still there.
> > > 
> > > > If not, then this appears to be some sort of RCU-related livelock
> > > > with brutally overcommitted vcpus. On native hardware this would
> > > > show up too, in a less drastic form, as a spurious bootup delay.
> > > 
> > > I don't think it was overcommitted by *that* much. With that commit it
> > > usually hangs at 20-40 vcpus, while without it I can go up to 255.
> > 
> > Here is a diagnostic patch, untested.  It assumes that your system
> > has only a few CPUs (maybe 8-16) and that timers are still running.
> > It dumps out some RCU state if grace periods extend for more than
> > a few seconds.
> > 
> > To activate it, call rcu_diag_timer_start() from process context.
> > To stop it, call rcu_diag_timer_stop(), also from process context.
> 
> Since the hang happens in the guest kernel very early during boot, I
> can't call rcu_diag_timer_start(). What would be a good place to put
> the _start() call instead?
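
[The diagnostic patch itself is not shown in the quoted text above.  As a
rough illustration of the mechanism it describes, a start/stop pair driving
a periodic timer that complains when grace periods stall, consider the
sketch below.  Only the rcu_diag_timer_start() and rcu_diag_timer_stop()
names come from the mail; the rcu_batches_completed() check and the
10-second period are assumptions, not Paul's actual patch.]

/*
 * Illustrative sketch only: a periodic timer that complains when RCU
 * appears to have made no progress since its last firing.  The real
 * patch presumably dumps far more internal RCU state.
 */
#include <linux/kernel.h>
#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/rcupdate.h>

#define RCU_DIAG_PERIOD	(10 * HZ)	/* check every ~10 seconds */

static struct timer_list rcu_diag_timer;
static long rcu_diag_last_completed;

static void rcu_diag_timer_func(unsigned long unused)
{
	long completed = rcu_batches_completed();

	/* No grace period has completed since the last check: report it. */
	if (completed == rcu_diag_last_completed)
		pr_err("rcu_diag: no grace period in %d seconds\n",
		       RCU_DIAG_PERIOD / HZ);
	rcu_diag_last_completed = completed;
	mod_timer(&rcu_diag_timer, jiffies + RCU_DIAG_PERIOD);
}

void rcu_diag_timer_start(void)		/* call from process context */
{
	rcu_diag_last_completed = rcu_batches_completed();
	setup_timer(&rcu_diag_timer, rcu_diag_timer_func, 0);
	mod_timer(&rcu_diag_timer, jiffies + RCU_DIAG_PERIOD);
}

void rcu_diag_timer_stop(void)		/* call from process context */
{
	del_timer_sync(&rcu_diag_timer);
}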

Assuming that the failure occurs in the host OS rather than in the guest
OS, I suggest placing rcu_diag_timer_start() in the host code that starts
up the guest.

On the other hand, if the failure is occurring in the guest OS, then
I suggest placing the call to rcu_diag_timer_start() just after timer
initialization -- that is, assuming that interrupts are enabled at the
time of the failure.  If interrupts are not yet enabled at the time of
the failure, color me clueless.
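
[One way to act on the guest-side suggestion above, assuming the diagnostic
patch is applied to the guest kernel: early_initcall() callbacks run from
process context with interrupts enabled, after timer initialization, which
matches the constraints Paul lists.  As noted, though, this comes too late
if the hang precedes initcall processing.  A hedged sketch:]

#include <linux/init.h>

/* Provided by the diagnostic patch. */
extern void rcu_diag_timer_start(void);

static int __init rcu_diag_autostart(void)
{
	/* Process context, interrupts enabled by this point. */
	rcu_diag_timer_start();
	return 0;
}
early_initcall(rcu_diag_autostart);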

							Thanx, Paul

