Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

On Thu, Aug 25, 2011 at 6:06 AM, Pekka Enberg <penberg@xxxxxxxxxx> wrote:
> On Wed, 2011-08-24 at 21:49 -0700, David Evensky wrote:
>> On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
>> >
>> > On 24.08.2011, at 17:25, David Evensky wrote:
>> >
>> > >
>> > >
>> > > This patch adds a PCI device that provides PCI device memory to the
>> > > guest. This memory in the guest exists as a shared memory segment in
>> > > the host. This is similar to the memory sharing capability of Nahanni
>> > > (ivshmem) available in QEMU. In this case, the shared memory segment
>> > > is exposed as a PCI BAR only.
>> > >
>> > > A new command line argument is added as:
>> > >    --shmem pci:0xc8000000:16MB:handle=/newmem:create
>> > >
>> > > which will set the PCI BAR at 0xc8000000; the shared memory segment
>> > > and the region pointed to by the BAR will be 16MB. On the host side
>> > > the shm_open handle will be '/newmem', and the kvm tool will create
>> > > the shared segment, set its size, and initialize it. If the size,
>> > > handle, or create flag are absent, they will default to 16MB,
>> > > handle=/kvm_shmem, and create will be false. The address family,
>> > > 'pci:', is also optional as it is the only address family currently
>> > > supported. Only a single --shmem is supported at this time.
>> >
>> > Did you have a look at ivshmem? It does that today, but also gives
>> > you an IRQ line so the guests can poke each other. For something as
>> > simple as this, I don't see why we'd need two competing
>> > implementations.
>>
>> Isn't ivshmem in QEMU? If so, then I don't think there is any
>> competition. How do you feel that these are competing?
>
> It's obviously not competing. One thing you might want to consider is
> making the guest interface compatible with ivshmem. Is there any reason
> we shouldn't do that? I don't consider that a requirement, just nice to
> have.

The point of implementing the same interface as ivshmem is that users
don't need to rejig guests or applications in order to switch between
hypervisors.  A different interface also prevents same-to-same
benchmarks.

There is little benefit to creating another virtual device interface
when a perfectly good one already exists.  The question should be: how
is this shmem device different and better than ivshmem?  If there is
no justification then implement the ivshmem interface.
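For reference, the host-side setup described in the patch boils down to
the standard POSIX shm_open/ftruncate/mmap sequence.  Here is a rough
sketch under that assumption -- not the actual kvm tool or ivshmem code;
the handle and size just mirror the example command line:

/* Rough sketch only: open (or create) the POSIX shm object named by the
 * handle, size it, and map it so it could back a PCI BAR.  Build with
 * -lrt on older glibc. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void *map_shmem(const char *handle, size_t size, int create)
{
	int oflags = O_RDWR | (create ? O_CREAT : 0);
	int fd = shm_open(handle, oflags, 0600);
	void *mem;

	if (fd < 0) {
		perror("shm_open");
		return NULL;
	}
	/* Only the creator sizes (and thereby zero-fills) the segment. */
	if (create && ftruncate(fd, size) < 0) {
		perror("ftruncate");
		close(fd);
		return NULL;
	}
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);
	return mem == MAP_FAILED ? NULL : mem;
}

int main(void)
{
	/* Mirrors the example: --shmem pci:0xc8000000:16MB:handle=/newmem:create */
	void *mem = map_shmem("/newmem", 16 << 20, 1);

	if (!mem)
		return 1;
	/* ... the tool would then expose 'mem' to the guest as the BAR ... */
	munmap(mem, 16 << 20);
	return 0;
}

Whether that mapping is then presented to the guest as a bare BAR or with
the ivshmem register layout is exactly the interface question above.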

Stefan

