On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg <penberg@xxxxxxxxxx> wrote:
> Hi Stefan,
>
> On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
>>> It's obviously not competing. One thing you might want to consider is
>>> making the guest interface compatible with ivshmem. Is there any reason
>>> we shouldn't do that? I don't consider that a requirement, just nice to
>>> have.
>>
>> The point of implementing the same interface as ivshmem is that users
>> don't need to rejig guests or applications in order to switch between
>> hypervisors. A different interface also prevents like-for-like
>> benchmarks between the two hypervisors.
>>
>> There is little benefit to creating another virtual device interface
>> when a perfectly good one already exists. The question should be: how
>> is this shmem device different from and better than ivshmem? If there
>> is no justification, then implement the ivshmem interface.
>
> So which interface are we actually talking about? Userspace/kernel in the
> guest or hypervisor/guest kernel?

The hardware interface. Same PCI BAR layout and semantics (a sketch of
that layout follows below).

> Either way, while it would be nice to share the interface, it's not a
> *requirement* for tools/kvm unless ivshmem is specified in the virtio
> spec or the driver is in mainline Linux. We don't intend to require people
> to implement non-standard and non-Linux QEMU interfaces. OTOH,
> ivshmem would make the PCI ID problem go away.

Introducing yet another non-standard and non-Linux interface doesn't help
though. If there is no significant improvement over ivshmem then it makes
sense to let ivshmem gain critical mass and more users instead of
fragmenting the space.

Stefan
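
For reference, here is a minimal sketch of the ivshmem guest-visible
hardware interface being discussed, as implemented by QEMU's ivshmem
device: BAR0 holds the control registers, BAR2 maps the shared memory
itself. The register offsets and the doorbell encoding match the QEMU
implementation; the ivshmem_notify() helper is purely illustrative and
not part of any existing driver.

    #include <stdint.h>

    /* ivshmem PCI device, guest-visible layout:
     *   BAR0: MMIO control/status registers (below)
     *   BAR1: MSI-X table, when MSI-X is in use
     *   BAR2: the shared memory region itself
     */
    enum ivshmem_registers {
        IVSHMEM_INTR_MASK   = 0x00, /* interrupt mask (pin-based IRQs) */
        IVSHMEM_INTR_STATUS = 0x04, /* interrupt status, cleared on read */
        IVSHMEM_IV_POSITION = 0x08, /* read-only: this guest's peer ID */
        IVSHMEM_DOORBELL    = 0x0c, /* write-only: notify another peer */
    };

    /* Illustrative helper: ring a peer's doorbell through a mapped BAR0.
     * The doorbell encoding is (peer_id << 16) | interrupt_vector.
     */
    static inline void ivshmem_notify(volatile uint32_t *bar0,
                                      uint16_t peer_id, uint16_t vector)
    {
        bar0[IVSHMEM_DOORBELL / 4] = ((uint32_t)peer_id << 16) | vector;
    }

Matching this layout and the doorbell semantics is what would let guest
drivers and applications move between QEMU and tools/kvm unchanged.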