Re: [PATCH] KVM: PPC: Convert openpic lock to raw_spinlock

On Fri, 2014-09-12 at 09:12 -0500, Purcareata Bogdan-B43198 wrote:
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Thursday, September 11, 2014 9:19 PM
> > To: Purcareata Bogdan-B43198
> > Cc: kvm-ppc@xxxxxxxxxxxxxxx; kvm@xxxxxxxxxxxxxxx
> > Subject: Re: [PATCH] KVM: PPC: Convert openpic lock to raw_spinlock
> > 
> > On Thu, 2014-09-11 at 15:25 -0400, Bogdan Purcareata wrote:
> > > This patch enables running intensive I/O workloads, e.g. netperf, in a guest
> > > deployed on an RT host. No change for !RT kernels.
> > >
> > > The openpic spinlock becomes a sleeping mutex on an RT system. This no longer
> > > guarantees that EPR is atomic with exception delivery. The guest VCPU thread
> > > fails due to a BUG_ON(preemptible()) when running netperf.
> > >
> > > In order to make the kvmppc_mpic_set_epr() call safe on RT from non-atomic
> > > context, convert the openpic lock to a raw_spinlock. A similar approach can
> > > be seen for x86 platforms in the following commit [1].
> > >
> > > Here are some comparative cyclictest measurements run inside a high
> > > priority RT guest running on an RT host. The guest has 1 VCPU and the
> > > test has been run for 15 minutes. The guest runs ~750 hackbench
> > > processes as background stress.
> > 
> > Does hackbench involve triggering interrupts that would go through the
> > MPIC?  You may want to try an I/O-heavy benchmark to stress the MPIC
> > code (the more interrupt sources are active at once, the "better").
> 
> Before this patch, running netperf/iperf in the guest always resulted
> in hitting the aforementioned BUG_ON when the host was RT. This is
> why I can't provide comparative cyclictest measurements before and after
> the patch with heavy I/O stress. Since I had no problem running
> hackbench before, I'm assuming it doesn't involve interrupts passing
> through the MPIC. The measurements were posted just to show that the
> patch doesn't cause regressions elsewhere.
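
For context, the conversion described in the quoted patch is the usual
spinlock_t -> raw_spinlock_t swap: a raw_spinlock_t keeps spinning (and keeps
its holder non-preemptible) on PREEMPT_RT instead of turning into a sleeping
rt_mutex. A minimal sketch of the pattern, assuming the lock is opp->lock as
in arch/powerpc/kvm/mpic.c; this is illustrative, not the actual patch hunk:

#include <linux/spinlock.h>

/* Sketch only -- stands in for struct openpic in arch/powerpc/kvm/mpic.c. */
struct openpic_sketch {
	raw_spinlock_t lock;		/* was: spinlock_t lock; */
	/* register state, IRQ sources, etc. elided */
};

static void openpic_sketch_init(struct openpic_sketch *opp)
{
	raw_spin_lock_init(&opp->lock);	/* was: spin_lock_init() */
}

static void openpic_sketch_access(struct openpic_sketch *opp)
{
	unsigned long flags;

	/* was: spin_lock_irqsave()/spin_unlock_irqrestore() */
	raw_spin_lock_irqsave(&opp->lock, flags);
	/*
	 * Critical section: on RT this now runs with interrupts hard-off
	 * and preemption disabled, so it must stay short and must not
	 * sleep or allocate.
	 */
	raw_spin_unlock_irqrestore(&opp->lock, flags);
}

The flip side is that every critical section under that lock now contributes
directly to IRQ-off latency on the host, which is why the length of those
sections, and what they iterate over, matters.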

I know you can't provide before/after, but it would be nice to see what
the after numbers are with heavy MPIC activity.

> > Also try a guest with many vcpus.
> 
> AFAIK, without the MSI affinity patches [1], all VFIO interrupts will
> go to core 0 in the guest. In this case, I guess there won't be
> contention-induced latencies due to multiple VCPUs expecting to have
> their interrupts delivered. Am I getting it wrong?

It's not about contention, but about loops in the MPIC code that iterate
over the entire set of vcpus.
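
Roughly the shape of the concern (illustrative only -- irq_targets_vcpu() is
a made-up stand-in for the MPIC destination check, and this is not the real
mpic.c code): with the lock now raw, the whole walk over the vcpus runs with
interrupts hard-disabled, so the worst-case IRQ-off window grows with the
number of vcpus.

#include <linux/kvm_host.h>
#include <linux/spinlock.h>

struct mpic_sketch {
	raw_spinlock_t lock;
	struct kvm *kvm;
};

/* Hypothetical destination check -- stand-in only. */
static bool irq_targets_vcpu(struct mpic_sketch *opp, int irq,
			     struct kvm_vcpu *vcpu)
{
	return true;
}

static void sketch_deliver_irq(struct mpic_sketch *opp, int irq)
{
	struct kvm_vcpu *vcpu;
	unsigned long flags;
	int i;

	raw_spin_lock_irqsave(&opp->lock, flags);
	/* Interrupts stay off for the entire walk; cost scales with vcpu count. */
	for (i = 0; i < atomic_read(&opp->kvm->online_vcpus); i++) {
		vcpu = kvm_get_vcpu(opp->kvm, i);
		if (vcpu && irq_targets_vcpu(opp, irq, vcpu))
			kvm_vcpu_kick(vcpu);
	}
	raw_spin_unlock_irqrestore(&opp->lock, flags);
}

With one vcpu that window is small; with many vcpus and a busy interrupt load
it is exactly the kind of path cyclictest should be run against.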

-Scott




