Re: Mask bit support's API


 



On Tuesday 23 November 2010 22:06:20 Avi Kivity wrote:
> On 11/23/2010 03:57 PM, Yang, Sheng wrote:
> > >  Yeah, but it won't be included in this patchset.
> > >  
> > >  What API changes are needed?  I'd like to see the complete API.
> > 
> > I am not sure about it. But I suppose the structure should be the same?
> > In fact it's pretty hard for me to imagine what's needed for virtio in
> > the future, especially since there is no such code now. I would really
> > prefer to deal with assigned devices and virtio separately, which would
> > make the work much easier. But it seems you won't agree to that.
> 
> First, I don't really see why the two cases are different (but I don't
> do a lot in this space).  Surely between you and Michael, you have all
> the information?
> 
> Second, my worry is a huge number of ABI variants that come from
> incrementally adding features.  I want to implement bigger chunks of
> functionality.  So I'd like to see all potential users addressed, at
> least from the ABI point of view if not the implementation.
> 
> > >  The API needs to be compatible with the pending bit, even if we don't
> > >  implement it now.  I want to reduce the rate of API changes.
> > 
> > This can be implemented with this API by just adding a flag for it. And I
> > would take this into consideration in the next API proposal.
> 
> Shouldn't kvm also service reads from the pending bitmask?

Of course KVM should service reads from the pending bitmask. For assigned
devices, it is the kernel that would set the pending bit; I am not sure about
virtio. The interface is GET_ENTRY, so reads work fine with it.
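
To make that concrete, below is a minimal sketch of what a GET_ENTRY-style
argument could carry, with the pending state reported as a flag next to the
mask bit. All structure, field, and flag names here are invented for
illustration; this is not the proposed ABI.

/* Hypothetical sketch of a GET_ENTRY-style argument structure; every
 * name and flag below is illustrative, not the real proposed ABI. */
#include <stdint.h>
#include <stdio.h>

#define KVM_MSIX_FLAG_MASKED  (1u << 0)  /* entry's mask bit is set    */
#define KVM_MSIX_FLAG_PENDING (1u << 1)  /* entry's pending bit is set */

struct kvm_msix_entry {
    uint32_t bdf;      /* assigned device as bus/device/function */
    uint16_t entry;    /* index into the device's MSI-X table    */
    uint16_t flags;    /* KVM_MSIX_FLAG_*; room for future bits  */
    uint32_t addr_lo;  /* MSI-X message address, low 32 bits     */
    uint32_t addr_hi;  /* MSI-X message address, high 32 bits    */
    uint32_t data;     /* MSI-X message data                     */
};

int main(void)
{
    /* A pending-bit read would come back in flags alongside the mask. */
    struct kvm_msix_entry e = { .entry = 0, .flags = KVM_MSIX_FLAG_PENDING };
    printf("entry %u pending: %d\n", (unsigned)e.entry,
           !!(e.flags & KVM_MSIX_FLAG_PENDING));
    return 0;
}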
 
> > >  So instead of
> > >  
> > >  - guest reads/writes msix
> > >  - kvm filters mmio, implements some, passes others to userspace
> > >  
> > >  we have
> > >  
> > >  - guest reads/writes msix
> > >  - kvm implements all
> > >  - some writes generate an additional notification to userspace
> > 
> > I suppose we don't need to generate a notification to userspace? Since
> > every read/write is handled by the kernel, userspace just needs an
> > interface to the kernel to get/set the entry - and well, does userspace
> > need to do that when the kernel can handle all of them? Maybe not...
> 
> We could have the kernel handle addr/data writes by setting up an
> internal interrupt routing.  A disadvantage is that more work is needed
> if we emulate interrupt remapping in qemu.

In fact, modifying irq routing in the kernel is also something I want to avoid.

So, the flow would be:

1. The kernel gets the MMIO write and records it in its own MSI table.
2. KVM exits to QEmu with a specific exit reason.
3. QEmu knows it has to sync the MSI table, so it reads the entries from the kernel.
4. QEmu sees that it was a write, so it reprograms the irq routing table using
   the entries above.
5. Done.

But wait, why should QEmu read the entries from the kernel? With the default
exit we already have the information about which entry is being modified and
what is being written, so we can use that directly. This way we also don't need
a specific exit reason - just exiting to QEmu in the normal way is fine.

Then it would be:

1. The kernel gets the MMIO write and records it in its own MSI table.
2. KVM exits to QEmu, indicating a normal MMIO exit.
3. QEmu sees that it was a write, updates its own MSI table (it may need to
   query the mask bit from the kernel), and reprograms the irq routing table
   using the entries above.
4. Done.
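
As a rough illustration of the QEmu half of this flow, here is a small sketch
in C. Every name below is made up for the example (update_irq_routing stands in
for whatever reprograms the kernel's routing table); it is not real QEmu code.

/* Sketch of the userspace half of the flow above: on an MMIO-exit write
 * to the MSI-X table, update QEmu's copy of the entry and reprogram the
 * irq routing table. Purely illustrative. */
#include <stdint.h>
#include <stdio.h>

struct msix_entry {
    uint32_t addr_lo, addr_hi, data, ctrl;  /* one 16-byte MSI-X entry */
};

static struct msix_entry msix_table[32];    /* QEmu's own MSI table copy */

/* Stand-in for rebuilding the kernel's irq routing entry for one vector. */
static void update_irq_routing(int vector, const struct msix_entry *e)
{
    printf("reprogram vector %d: addr=%#x:%#x data=%#x\n",
           vector, e->addr_hi, e->addr_lo, e->data);
}

/* Called from the normal MMIO-exit path for a write to the MSI-X table. */
static void msix_table_mmio_write(uint64_t offset, uint32_t val)
{
    int vector = offset / sizeof(struct msix_entry);
    int field  = (offset % sizeof(struct msix_entry)) / 4;

    ((uint32_t *)&msix_table[vector])[field] = val;

    /* Only address/data changes require reprogramming the routing table;
     * the vector control word (field 3) holds the mask bit, which the
     * kernel already tracks in this scheme. */
    if (field != 3)
        update_irq_routing(vector, &msix_table[vector]);
}

int main(void)
{
    msix_table_mmio_write(0, 0xfee00000); /* guest writes addr_lo of entry 0 */
    return 0;
}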

Then why should the kernel keep its own MSI table? I think the only reason is
that we can speed up reads that way - but the reads we want to speed up are
mostly on the enabled entry (the first entry), which is already in the IRQ
routing table...

And for enabled/disabled entries, you can see it like this: entries present in
the routing table are considered enabled; otherwise they are disabled. Then you
don't need to be bothered by pci_enable_msix().

So our strategy for accelerating reads can be:

If the entry is contained in the irq routing table, use it; otherwise let QEmu
deal with it. Because it is QEmu that owns the irq routing table,
synchronization is guaranteed. We then don't need the MSI table in the kernel.

And for writes, we just want to cover the mask bit, but none of the others.
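
Putting the read and write halves together, a loose sketch of the kernel-side
dispatch could look like the following. The types, sizes, and helpers are all
assumptions made for the example, not actual KVM code:

/* Illustrative kernel-side dispatch for MSI-X table accesses: serve
 * reads from the irq routing table when the entry is there, handle
 * writes only when they touch the mask bit, and punt everything else
 * to userspace. Names and layouts are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

enum mmio_result { HANDLED_IN_KERNEL, EXIT_TO_USERSPACE };

static uint32_t routing[32][4];  /* mirror of entries QEmu has routed  */
static bool     routed[32];      /* is this vector in the routing table? */
static bool     masked[32];      /* per-entry mask bit state           */

/* Serve a read from the routing table if the entry is present. */
static bool routing_table_read(int vector, int field, uint32_t *val)
{
    if (!routed[vector])
        return false;
    *val = routing[vector][field];
    return true;
}

static enum mmio_result msix_mmio_access(uint64_t offset, bool is_write,
                                         uint32_t *val)
{
    int vector = offset / 16;        /* 16 bytes per MSI-X entry      */
    int field  = (offset % 16) / 4;  /* field 3 is the vector control */

    if (!is_write)
        /* Entry in the routing table => use it; otherwise QEmu, which
         * owns the routing table, handles the read. */
        return routing_table_read(vector, field, val)
                   ? HANDLED_IN_KERNEL : EXIT_TO_USERSPACE;

    if (field == 3) {                /* mask bit lives in bit 0 */
        masked[vector] = *val & 1;
        return HANDLED_IN_KERNEL;
    }
    return EXIT_TO_USERSPACE;        /* addr/data writes go to QEmu */
}

int main(void)
{
    uint32_t ctrl = 1, v;
    msix_mmio_access(12, true, &ctrl);  /* guest masks entry 0 in kernel */
    return msix_mmio_access(0, false, &v) == EXIT_TO_USERSPACE ? 0 : 1;
}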

Is the concept here more acceptable?

The issue here is that the MSI table and the irq routing table hold duplicate
information for some entries. My initial proposal is to use the irq routing
table in the kernel, so we don't need to duplicate the information.


--
regards
Yang, Sheng

