Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

Just FYI, one issue that I found with exposing host memory regions as
a PCI BAR (including via a very old version of the ivshmem driver...
haven't tried a newer one) is that x86's pci_mmap_page_range doesn't
want to set up a write-back cacheable mapping of a BAR.

It may not matter for your requirements, but the uncached access
reduced guest<->host bandwidth via the shared memory driver by a lot.
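
If it does matter, one way around it is to have the guest driver provide
its own mmap handler and leave the page protection write-back instead of
going through the sysfs resource files. A rough sketch (untested against
your patch; the BAR index and the shmem_pdev pointer are assumptions):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pci.h>

static struct pci_dev *shmem_pdev;	/* assumed to be saved in probe() */

static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long pfn = pci_resource_start(shmem_pdev, 2) >> PAGE_SHIFT;
	unsigned long len = vma->vm_end - vma->vm_start;

	if (len > pci_resource_len(shmem_pdev, 2))
		return -EINVAL;

	/* Unlike pci_mmap_page_range(), don't force the mapping uncached;
	 * vma->vm_page_prot is left write-back here. */
	return remap_pfn_range(vma, vma->vm_start, pfn, len,
			       vma->vm_page_prot);
}

Whether that is actually safe depends on the host mapping the same
region with the same cache attributes, otherwise you get aliasing.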


If you need the physical address to be fixed, you might be better off
reserving a memory region in the e820 map rather than using a PCI BAR,
since BARs can move around.
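
In kvm tool terms that would mean appending a reserved entry to the e820
table handed to the guest. Something along these lines (not the actual
kvm tool code; the address and size are just examples):

#include <stdint.h>

#define E820_RESERVED	2

/* Layout of one e820 entry as passed to the guest in the boot params. */
struct e820entry {
	uint64_t addr;
	uint64_t size;
	uint32_t type;
} __attribute__((packed));

#define SHMEM_GPA	0xc8000000ULL	/* example fixed guest address */
#define SHMEM_SIZE	(16ULL << 20)	/* example 16 MB window */

static void e820_add_shmem(struct e820entry *table, uint8_t *nr_entries)
{
	struct e820entry *e = &table[(*nr_entries)++];

	e->addr = SHMEM_GPA;
	e->size = SHMEM_SIZE;
	e->type = E820_RESERVED;
}

The guest's memory allocator then stays away from that range, and a
trivial driver can map it at a known, fixed physical address.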


On Thu, Aug 25, 2011 at 8:08 AM, David Evensky
<evensky@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> Adding in the rest of what ivshmem does shouldn't affect our use, *I
> think*.  I hadn't intended this to do everything that ivshmem does,
> but I can see how that would be useful. It would be cool if it could
> grow into that.
>
> Our requirements for the driver in kvm tool are that another program
> on the host can create a shared segment (anonymous, non-file backed)
> with a specified handle, size, and contents; that this segment is
> available to the guest at boot time at a specified address; and that
> no driver will change the contents of the memory except under direct
> user action. Also, when the guest goes away the shared memory segment
> shouldn't be affected (e.g. have its contents changed). Finally, we
> cannot change the lightweight nature of kvm tool.
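>
> (For concreteness, the sort of thing the host-side program does is
> along the lines of the sketch below. It is just an illustration; the
> name and size are made up, and it isn't the code we actually use:
>
>	#include <fcntl.h>
>	#include <string.h>
>	#include <sys/mman.h>
>	#include <unistd.h>
>
>	#define SHM_NAME "/kvm_shmem"	/* made-up handle */
>	#define SHM_SIZE (16 << 20)	/* made-up size, 16 MB */
>
>	int main(void)
>	{
>		int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
>
>		if (fd < 0 || ftruncate(fd, SHM_SIZE) < 0)
>			return 1;
>
>		char *p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
>			       MAP_SHARED, fd, 0);
>		if (p == MAP_FAILED)
>			return 1;
>
>		strcpy(p, "initial contents");	/* populate the segment */
>		return 0;	/* segment persists until shm_unlink() */
>	}
>
> The segment lives in tmpfs rather than in a regular file, and it stays
> around after the guest exits until someone calls shm_unlink().)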
>
> That last requirement (the segment surviving the guest) is the feature
> of ivshmem that I need to check today. I did some testing a month ago,
> but it wasn't detailed enough to check this out.
>
> \dae
>
>
>
>
> On Thu, Aug 25, 2011 at 02:25:48PM +0300, Sasha Levin wrote:
> > On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
> > > On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg <penberg@xxxxxxxxxx> wrote:
> > > > Hi Stefan,
> > > >
> > > > On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
> > > >>> It's obviously not competing. One thing you might want to consider is
> > > >>> making the guest interface compatible with ivshmem. Is there any reason
> > > >>> we shouldn't do that? I don't consider that a requirement, just nice to
> > > >>> have.
> > > >>
> > > >> The point of implementing the same interface as ivshmem is that users
> > > >> don't need to rejig guests or applications in order to switch between
> > > >> hypervisors.  A different interface also rules out apples-to-apples
> > > >> benchmark comparisons between the two.
> > > >>
> > > >> There is little benefit to creating another virtual device interface
> > > >> when a perfectly good one already exists.  The question should be: how
> > > >> is this shmem device different and better than ivshmem?  If there is
> > > >> no justification then implement the ivshmem interface.
> > > >
> > > > So which interface are we actually talking about? Userspace/kernel in the
> > > > guest or hypervisor/guest kernel?
> > >
> > > The hardware interface.  Same PCI BAR layout and semantics.
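> > >
> > > For reference, as I read Cam's spec: BAR0 is a small register block
> > > (IntrMask at offset 0, IntrStatus at 4, IVPosition at 8, Doorbell at
> > > 12), BAR1 holds the MSI-X table, and BAR2 maps the shared memory
> > > itself. As guest-visible constants, purely for illustration:
> > >
> > >	/* ivshmem BAR0 registers, as I read the QEMU device spec */
> > >	#define IVSHMEM_INTR_MASK	0x00	/* interrupt mask */
> > >	#define IVSHMEM_INTR_STATUS	0x04	/* interrupt status */
> > >	#define IVSHMEM_IV_POSITION	0x08	/* this guest's ID (read-only) */
> > >	#define IVSHMEM_DOORBELL	0x0c	/* (peer ID << 16) | vector (write-only) */
> > >
> > >	/* BAR1: MSI-X table; BAR2: the shared memory itself */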
> > >
> > > > Either way, while it would be nice to share the interface, it's not a
> > > > *requirement* for tools/kvm unless ivshmem is specified in the virtio
> > > > spec or the driver is in mainline Linux. We don't intend to require people
> > > > to implement non-standard and non-Linux QEMU interfaces. OTOH,
> > > > ivshmem would make the PCI ID problem go away.
> > >
> > > Introducing yet another non-standard and non-Linux interface doesn't
> > > help though.  If there is no significant improvement over ivshmem then
> > > it makes sense to let ivshmem gain critical mass and more users
> > > instead of fragmenting the space.
> >
> > I support making it ivshmem-compatible, though it doesn't have to be a
> > requirement right now (that is, use this patch as a base and build it
> > towards ivshmem - which shouldn't be an issue since this patch provides
> > the PCI+SHM parts which are required by ivshmem anyway).
> >
> > ivshmem is a good, documented, stable interface with a lot of research
> > and testing behind it. Looking at the spec, it's obvious that Cam had
> > KVM in mind when designing it, and that's exactly what we want to have
> > in the KVM tool.
> >
> > David, did you have any plans to extend it to become ivshmem-compatible?
> > If not, would turning it into one badly break any code that already
> > depends on it?
> >
> > --
> >
> > Sasha.
> >

