RE: [PATCH] KVM: PPC: Convert openpic lock to raw_spinlock

> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Thursday, September 11, 2014 9:19 PM
> To: Purcareata Bogdan-B43198
> Cc: kvm-ppc@xxxxxxxxxxxxxxx; kvm@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH] KVM: PPC: Convert openpic lock to raw_spinlock
> 
> On Thu, 2014-09-11 at 15:25 -0400, Bogdan Purcareata wrote:
> > This patch enables running intensive I/O workloads, e.g. netperf, in a guest
> > deployed on a RT host. No change for !RT kernels.
> >
> > The openpic spinlock becomes a sleeping mutex on a RT system. This no longer
> > guarantees that EPR is atomic with exception delivery. The guest VCPU thread
> > fails due to a BUG_ON(preemptible()) when running netperf.
> >
> > In order to make the kvmppc_mpic_set_epr() call safe on RT from non-atomic
> > context, convert the openpic lock to a raw_spinlock. A similar approach can
> > be seen for x86 platforms in the following commit [1].
> >
> > Here are some comparative cyclictest measurements run inside a high
> > priority RT guest running on a RT host. The guest has 1 VCPU and the
> > test has been run for 15 minutes. The guest runs ~750 hackbench
> > processes as background stress.
> 
> Does hackbench involve triggering interrupts that would go through the
> MPIC?  You may want to try an I/O-heavy benchmark to stress the MPIC
> code (the more interrupt sources are active at once, the "better").

Before this patch, running netperf/iperf in the guest always hit the aforementioned BUG_ON when the host was RT. This is why I can't provide comparative cyclictest measurements before and after the patch with heavy I/O stress. Since hackbench ran without problems before, I'm assuming it doesn't involve interrupts passing through the MPIC. The measurements were posted only to show that the patch doesn't break anything elsewhere.
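
For reference, the change boils down to the usual spinlock -> raw_spinlock conversion pattern; here is a minimal sketch under the assumptions stated in the commit message (the struct and function names below are illustrative stand-ins, not the actual mpic.c code):

#include <linux/spinlock.h>

/*
 * On PREEMPT_RT, spinlock_t becomes a sleeping lock, so a lock that must
 * be held while non-preemptible (atomic with exception delivery) has to
 * be a raw_spinlock_t, which stays a true spinning lock on RT.
 */
struct openpic_sketch {			/* hypothetical stand-in */
	raw_spinlock_t lock;		/* was: spinlock_t lock; */
	/* ... emulated MPIC state ... */
};

static void sketch_set_epr(struct openpic_sketch *opp)
{
	unsigned long flags;

	/* was: spin_lock_irqsave(&opp->lock, flags); */
	raw_spin_lock_irqsave(&opp->lock, flags);

	/*
	 * The EPR update stays atomic with exception delivery, so the
	 * BUG_ON(preemptible()) in the vcpu path is not hit on RT.
	 */

	raw_spin_unlock_irqrestore(&opp->lock, flags);
}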

> Also try a guest with many vcpus.

AFAIK, without the MSI affinity patches [1], all VFIO interrupts will go to core 0 in the guest. In that case, I guess there won't be contention-induced latencies from multiple VCPUs expecting to have their interrupts delivered. Am I getting this wrong?

[1] https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-August/120247.html

Thanks,
Bogdan P.

