Re: [PATCHv4 2/2] kvm: deliver msi interrupts from irq handler

On Wed, Nov 28, 2012 at 05:25:17PM +0200, Michael S. Tsirkin wrote:
> On Wed, Nov 28, 2012 at 03:38:40PM +0200, Gleb Natapov wrote:
> > On Wed, Nov 28, 2012 at 03:25:44PM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Nov 28, 2012 at 02:45:09PM +0200, Gleb Natapov wrote:
> > > > On Wed, Nov 28, 2012 at 02:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > On Wed, Nov 28, 2012 at 02:13:01PM +0200, Gleb Natapov wrote:
> > > > > > On Wed, Nov 28, 2012 at 01:56:16PM +0200, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Nov 28, 2012 at 01:43:34PM +0200, Gleb Natapov wrote:
> > > > > > > > On Wed, Oct 17, 2012 at 06:06:06PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > > We can deliver certain interrupts, notably MSI,
> > > > > > > > > from atomic context.  Use kvm_set_irq_inatomic,
> > > > > > > > > to implement an irq handler for msi.
> > > > > > > > > 
> > > > > > > > > This reduces the pressure on scheduler in case
> > > > > > > > > where host and guest irq share a host cpu.
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
> > > > > > > > > ---
> > > > > > > > >  virt/kvm/assigned-dev.c | 36 ++++++++++++++++++++++++++----------
> > > > > > > > >  1 file changed, 26 insertions(+), 10 deletions(-)
> > > > > > > > > 
> > > > > > > > > diff --git a/virt/kvm/assigned-dev.c b/virt/kvm/assigned-dev.c
> > > > > > > > > index 23a41a9..3642239 100644
> > > > > > > > > --- a/virt/kvm/assigned-dev.c
> > > > > > > > > +++ b/virt/kvm/assigned-dev.c
> > > > > > > > > @@ -105,6 +105,15 @@ static irqreturn_t kvm_assigned_dev_thread_intx(int irq, void *dev_id)
> > > > > > > > >  }
> > > > > > > > >  
> > > > > > > > >  #ifdef __KVM_HAVE_MSI
> > > > > > > > > +static irqreturn_t kvm_assigned_dev_msi(int irq, void *dev_id)
> > > > > > > > > +{
> > > > > > > > > +	struct kvm_assigned_dev_kernel *assigned_dev = dev_id;
> > > > > > > > > +	int ret = kvm_set_irq_inatomic(assigned_dev->kvm,
> > > > > > > > > +				       assigned_dev->irq_source_id,
> > > > > > > > > +				       assigned_dev->guest_irq, 1);
> > > > > > > > Why not use kvm_set_msi_inatomic() and drop kvm_set_irq_inatomic() from
> > > > > > > > previous patch? 
> > > > > > > 
> > > > > > > kvm_set_msi_inatomic needs a routing entry, and
> > > > > > > we don't have the routing entry at this level.
> > > > > > > 
> > > > > > Yes, right. BTW, will this interface be used only for legacy assigned
> > > > > > devices, or will there be other users too?
> > > > > 
> > > > > I think long term we should convert irqfd to this too.
> > > > > 
> > > > VFIO uses irqfd, no? So why does legacy device assignment need that code
> > > > to achieve parity with VFIO?
> > > 
> > > Clarification: there are two issues:
> > > 
> > > 1. legacy assignment has bad worst case latency
> > > 	this is because we bounce all interrupts through threads
> > > 	this patch fixes this
> > > 2. irqfd injects all MSIs from an atomic context
> > > 	this patch does not fix this, but it can
> > > 	be fixed on top of this patch
> > > 
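The fast-path/slow-path split the patch introduces for point 1 can be sketched in standalone C. The names below (set_irq_inatomic, the msi_routed flag) are simplified stand-ins for the kernel interfaces under discussion, not the real API: the stub pretends atomic injection succeeds for MSI-routed GSIs and must be deferred otherwise.

```c
#include <errno.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED, IRQ_WAKE_THREAD };

struct assigned_dev {
	int guest_irq;
	int msi_routed;	/* 1 if the guest GSI maps to a single MSI entry */
};

/* Stub standing in for kvm_set_irq_inatomic(): atomic injection
 * works for MSI-routed GSIs, otherwise it reports -EWOULDBLOCK. */
static int set_irq_inatomic(struct assigned_dev *dev, int level)
{
	(void)level;
	return dev->msi_routed ? 0 : -EWOULDBLOCK;
}

/* Hard irq handler: try the atomic fast path first; when injection
 * cannot happen in atomic context, wake the threaded handler that
 * previously handled every interrupt. */
static enum irqreturn assigned_dev_msi(struct assigned_dev *dev)
{
	int ret = set_irq_inatomic(dev, 1);

	return ret == -EWOULDBLOCK ? IRQ_WAKE_THREAD : IRQ_HANDLED;
}
```

The design point is that the thread is only scheduled on the slow path, so the common MSI case no longer puts pressure on the scheduler when host and guest irq share a CPU.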
> > Thanks for clarification.
> > 
> > > > Also why long term? What are the complications?
> > > 
> > > Nothing special. Just need to be careful with all the rcu trickery that
> > > irqfd uses.
> > > 
> > > > > > > Further, guest irq might not be an MSI: host MSI
> > > > > > > can cause guest intx injection I think, we need to
> > > > > > > bounce it to thread as we did earlier.
> > > > > > Ah, so msi in kvm_assigned_dev_msi() is about host msi?
> > > > > 
> > > > > Yes.
> > > > > 
> > > > > > Can host be intx
> > > > > > but guest msi?
> > > > > 
> > > > > No.
> > > > > 
> > > > > > You seem not to handle this case. Also, injecting intx
> > > > > > via the ioapic is the same as injecting MSI. The format and the capability
> > > > > > of the irq message are essentially the same.
> > > > > 
> > > > > Absolutely. So we will be able to extend this to intx long term.
> > > > > The difference is in the fact that unlike msi, intx can
> > > > > (and does) have multiple entries per GSI.
> > > > > I have not yet figured out how to report and handle failure
> > > > > in the case where one of these can be injected in atomic context
> > > > > and another can't. There's likely an easy way, but that can
> > > > > be a follow-up patch I think.
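The partial-failure problem with multiple routing entries per GSI is easy to see in a standalone sketch. The structures below are hypothetical simplifications, not the kernel's routing tables: a GSI fans out to several entries (e.g. PIC and IOAPIC for intx), and if a later entry cannot be injected atomically, earlier entries have already fired by the time we report failure.

```c
#include <errno.h>

struct route_entry {
	int atomic_ok;	/* can this entry be injected in atomic context? */
};

/* Try to deliver to every entry routed to a GSI from atomic context.
 * If any entry needs process context, the whole delivery must be
 * retried from a thread, so report -EWOULDBLOCK; by then entries
 * 0..i-1 have already been injected, which is exactly the reporting
 * problem raised in the discussion above. */
static int set_gsi_inatomic(struct route_entry *e, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!e[i].atomic_ok)
			return -EWOULDBLOCK; /* earlier entries already fired */
		/* inject e[i] here ... */
	}
	return 0;
}
```

Sorting the routing entries so that the one that can fail comes first, as suggested later in the thread, sidesteps this: failure is then detected before anything has been injected.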
> > > >
> > > > I prefer to figure that out before introducing the interface.
> > > 
> > > Oh, come on, it's just an internal interface, not even exported
> > > to modules. Changing it would be trivial and the
> > > implementation is very small too.
> > > 
> > The question is whether it can be done at all. If it cannot, then it
> > does not matter that the interface is internal; but fortunately it looks
> > like it is possible, so I am fine with the proposed implementation for now.
> > 
> > > > Hmm, we
> > > > can get rid of vcpu loop in pic (should be very easily done by checking
> > > > for kvm_apic_accept_pic_intr() during apic configuration and keeping
> > > > global extint vcpu) and then sorting irq routing entries so that ioapic
> > > > entry is first since only ioapic injection can fail.
> > > 
> > > Yes, I think it's a good idea to remove as many vcpu loops as possible:
> > > for example, this vcpu loop is currently hit from atomic
> > > context anyway, isn't it?
> > Actually it is not. The lock is dropped just before the loop, so this
> > loop shouldn't be the roadblock at all.
> 
> Hmm, you are saying PIC injections in atomic context always succeed?
> 
No, I am saying vcpu loop is not hit from atomic context currently.

--
			Gleb.

