Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

I've tested ivshmem with the latest git pull (I had minor trouble
building on Debian sid, a VNC issue and an unused variable, but both
were trivial to work around).

QEMU's -device ivshmem,size=16,shm=/kvm_shmem

seems to function as my proposed

        --shmem pci:0xfd000000:16M:handle=/kvm_shmem

except that I can't specify the BAR. I can read the address I'm given
(0xfd000000) from lspci -vvv, but for our application we need to be able
to specify the address on the command line.
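
For reference, here is a minimal sketch of how the guest side can find and
map that BAR through sysfs. The PCI address 0000:00:04.0 and the use of
resource2 (the shared-memory BAR in the ivshmem spec) are assumptions;
adjust both to whatever lspci reports on your guest.

/*
 * Sketch: map the ivshmem shared-memory BAR from guest userspace via
 * sysfs.  The PCI address 0000:00:04.0 and the choice of resource2 are
 * assumptions; use the device address and BAR that lspci reports.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *res = "/sys/bus/pci/devices/0000:00:04.0/resource2";
    struct stat st;
    void *p;
    int fd;

    fd = open(res, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (fstat(fd, &st) < 0) {   /* st_size is the BAR size */
        perror("fstat");
        return 1;
    }

    /* Writes through this mapping land in the host's shared segment. */
    p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("shared-memory BAR mapped at %p (%ld bytes)\n",
           p, (long)st.st_size);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}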

If folks are open to it, I would like to request this feature in
ivshmem. It would be cool to test our application with QEMU, even if we
can't use it in production.

I didn't check the case where QEMU must create the shared segment from
scratch, so I didn't test how that compares with my proposed 'create'
flag, but I did look at the ivshmem source and it looks like it does the
right thing. (Makes me want to steal code to make mine better :-))
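
For what it's worth, the host-side setup I'm testing against boils down to
something like the sketch below: a program that creates and fills a named
POSIX shared-memory object before the VM starts. The /kvm_shmem name and
16 MiB size just mirror the command lines above, and the placeholder
contents are mine; link with -lrt on older glibc.

/*
 * Sketch: host program that creates and fills the named shared-memory
 * segment (a POSIX shm object, not a regular file) before the VM starts.
 * The name /kvm_shmem and the 16 MiB size mirror the command lines above.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define SHM_NAME "/kvm_shmem"
#define SHM_SIZE (16 * 1024 * 1024)

int main(void)
{
    void *p;
    int fd;

    /* Creates /dev/shm/kvm_shmem; the VM attaches to it by name. */
    fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        perror("shm_open");
        return 1;
    }
    if (ftruncate(fd, SHM_SIZE) < 0) {
        perror("ftruncate");
        return 1;
    }

    p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Put recognizable placeholder contents in for the guest to read. */
    strcpy(p, "hello from the host");

    munmap(p, SHM_SIZE);
    close(fd);
    return 0;
}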


\dae

On Thu, Aug 25, 2011 at 08:08:06AM -0700, David Evensky wrote:
> 
> Adding in the rest of what ivshmem does shouldn't affect our use, *I
> think*.  I hadn't intended this to do everything that ivshmem does,
> but I can see how that would be useful. It would be cool if it could
> grow into that.
> 
> Our requirements for the driver in kvm tool are that another program
> on the host can create a shared segment (anonymous, non-file backed)
> with a specified handle, size, and contents. The segment must be
> available to the guest at boot time at a specified address, and no
> driver may change the contents of the memory except under direct user
> action. Also, when the guest goes away the shared memory segment
> shouldn't be affected (e.g. its contents changed). Finally, we cannot
> change the lightweight nature of kvm tool.
> 
> This is the feature of ivshmem that I need to check today. I did some
> testing a month ago, but it wasn't detailed enough to check this out.
> 
> \dae
> 
> 
> 
> 
> On Thu, Aug 25, 2011 at 02:25:48PM +0300, Sasha Levin wrote:
> > On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
> > > On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg <penberg@xxxxxxxxxx> wrote:
> > > > Hi Stefan,
> > > >
> > > > On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
> > > >>> It's obviously not competing. One thing you might want to consider is
> > > >>> making the guest interface compatible with ivshmem. Is there any reason
> > > >>> we shouldn't do that? I don't consider that a requirement, just nice to
> > > >>> have.
> > > >>
> > > >> The point of implementing the same interface as ivshmem is that users
> > > >> don't need to rejig guests or applications in order to switch between
> > > >> hypervisors.  A different interface also prevents same-to-same
> > > >> benchmarks.
> > > >>
> > > >> There is little benefit to creating another virtual device interface
> > > >> when a perfectly good one already exists.  The question should be: how
> > > >> is this shmem device different and better than ivshmem?  If there is
> > > >> no justification then implement the ivshmem interface.
> > > >
> > > > So which interface are we actually talking about? Userspace/kernel in the
> > > > guest or hypervisor/guest kernel?
> > > 
> > > The hardware interface.  Same PCI BAR layout and semantics.
> > > 
> > > > Either way, while it would be nice to share the interface, it's not a
> > > > *requirement* for tools/kvm unless ivshmem is specified in the virtio
> > > > spec or the driver is in mainline Linux. We don't intend to require people
> > > > to implement non-standard and non-Linux QEMU interfaces. OTOH,
> > > > ivshmem would make the PCI ID problem go away.
> > > 
> > > Introducing yet another non-standard and non-Linux interface doesn't
> > > help though.  If there is no significant improvement over ivshmem then
> > > it makes sense to let ivshmem gain critical mass and more users
> > > instead of fragmenting the space.
> > 
> > I support making it ivshmem-compatible, though it doesn't have to be a
> > requirement right now (that is, use this patch as a base and build it
> > towards ivshmem - which shouldn't be an issue since this patch provides
> > the PCI+SHM parts which are required by ivshmem anyway).
> > 
> > ivshmem is a good, documented, stable interface backed by a lot of
> > research and testing. Looking at the spec it's obvious that Cam had
> > KVM in mind when designing it, and that's exactly what we want to
> > have in the KVM tool.
> > 
> > David, did you have any plans to extend it to become ivshmem-compatible?
> > If not, would turning it into such break any code that depends on it
> > horribly?
> > 
> > -- 
> > 
> > Sasha.
> > 