Re: Deadlock due to EPT_VIOLATION

On Tue, May 30, 2023, Brian Rak wrote:
> 
> On 5/26/2023 5:02 PM, Sean Christopherson wrote:
> > On Fri, May 26, 2023, Brian Rak wrote:
> > > On 5/24/2023 9:39 AM, Brian Rak wrote:
> > > > On 5/23/2023 12:22 PM, Sean Christopherson wrote:
> > > > > The other thing that would be helpful would be getting kernel stack
> > > > > traces of the relevant tasks/threads.  The vCPU stack traces won't be
> > > > > interesting, but it'll likely help to see what the fallocate() tasks
> > > > > are doing.
> > > > I'll see what I can come up with here, I was running into some
> > > > difficulty getting useful stack traces out of the VM
> > > I didn't have any luck gathering guest-level stack traces - kaslr makes it
> > > pretty difficult even if I have the guest kernel symbols.
> > Sorry, I was hoping to get host stack traces, not guest stack traces.  I am hoping
> > to see what the fallocate() in the *host* is doing.
> 
> Ah - here's a different instance of it with a full backtrace from the host:

Gah, I wasn't specific enough again.  Though there's no longer an fallocate() for
any of the threads, so that's probably a moot point.  What I wanted to see is what
exactly the host kernel was doing, e.g. if something in the host memory management
was indirectly preventing vCPUs from making forward progress.  But that doesn't
seem to be the case here, and I would expect other problems if fallocate() was
stuck.  So ignore that request for now.

> > Another datapoint that might provide insight would be seeing if/how KVM's page
> > faults stats change, e.g. look at /sys/kernel/debug/kvm/pf_* multiple times when
> > the guest is stuck.
> 
> It looks like pf_taken is the only real one incrementing:

Drat.  That's what I expected, but it doesn't narrow down the search much.
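
For the next time you catch a guest in this state, the useful signal is the deltas
rather than the raw values, i.e. sample the counters a few times a second or so
apart.  Something like this untested Python sketch is all I have in mind (assumes
debugfs is mounted at /sys/kernel/debug and that it runs as root):

  #!/usr/bin/env python3
  # Untested sketch: sample the global KVM page fault counters and print the
  # per-interval deltas, to see which ones move while the guest is stuck.
  import glob
  import time

  PF_GLOB = "/sys/kernel/debug/kvm/pf_*"

  def snapshot():
      return {p: int(open(p).read()) for p in glob.glob(PF_GLOB)}

  prev = snapshot()
  for _ in range(10):
      time.sleep(1)
      cur = snapshot()
      for p in sorted(cur):
          delta = cur[p] - prev.get(p, 0)
          if delta:
              print(f"{p.rsplit('/', 1)[-1]}: +{delta}")
      print("---")
      prev = cur

pf_taken climbing while pf_fixed stays flat is the retry-without-installing-a-SPTE
pattern, which matches what you're seeing.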

> > Are you able to run modified host kernels?  If so, the easiest next step, assuming
> > stack traces don't provide a smoking gun, would be to add printks into the page
> > fault path to see why KVM is retrying instead of installing a SPTE.
> We can, but it can take quite some time from when we do the update to
> actually seeing results.  This problem is inconsistent at best, and even
> though we're seeing it a ton of times a day, it can show up anywhere.
> Even if we rolled it out today, we'd still be looking at weeks/months before
> we had any significant number of machines on it.

Would you be able to run a bpftrace program on a host with a stuck guest?  If so,
I believe I could craft a program for the kvm_exit tracepoint that would rule out
or confirm two of the three likely culprits.
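
For illustration only, and explicitly not the program I'd send you: the same
kvm:kvm_exit tracepoint can be watched without bpftrace by enabling it through
tracefs and counting exit reasons for the stuck guest's vCPU threads.  An untested
Python sketch of that idea (the tracefs path and the trace_pipe line format below
are assumptions about a typical host):

  #!/usr/bin/env python3
  # Untested sketch: enable the kvm:kvm_exit tracepoint and count exit reasons
  # for the given vCPU TIDs while the guest is stuck.  Assumes tracefs is
  # mounted at /sys/kernel/tracing; run as root, pass the vCPU thread IDs on
  # the command line (no args == count everything).
  import collections
  import re
  import sys
  import time

  TRACEFS = "/sys/kernel/tracing"
  ENABLE = TRACEFS + "/events/kvm/kvm_exit/enable"
  PIPE = TRACEFS + "/trace_pipe"

  vcpu_tids = set(sys.argv[1:])
  counts = collections.Counter()

  with open(ENABLE, "w") as f:
      f.write("1")
  try:
      deadline = time.time() + 10
      with open(PIPE) as pipe:
          for line in pipe:
              if time.time() > deadline:
                  break
              # trace_pipe lines look roughly like:
              #   CPU 0/KVM-1234 [003] d... 567.890: kvm_exit: reason EPT_VIOLATION rip 0x...
              m = re.search(r"-(\d+)\s+\[\d+\].*kvm_exit:.*reason (\S+)", line)
              if m and (not vcpu_tids or m.group(1) in vcpu_tids):
                  counts[m.group(2)] += 1
  finally:
      with open(ENABLE, "w") as f:
          f.write("0")

  for reason, n in counts.most_common():
      print(f"{reason:24} {n}")

Treat that purely as a quick look at the exit-reason mix; it isn't a substitute for
the actual bpftrace program.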

Can you also dump the kvm.ko module params?  E.g. `tail /sys/module/kvm/parameters/*`



