On Tuesday, 19 May 2009 20:39:24, Anthony Liguori wrote:
> Perhaps something that maps closer to the current add_buf/get_buf API.
> Something like:
>
> struct iovec *(*map_buf)(struct virtqueue *vq, unsigned int *out_num,
>                          unsigned int *in_num);
> void (*unmap_buf)(struct virtqueue *vq, struct iovec *iov,
>                   unsigned int out_num, unsigned int in_num);
>
> There's symmetry here, which is good. The one bad thing about it is
> that it forces certain memory to be read-only and other memory to be
> read-write. I don't see that as a bad thing though.
>
> I think we'll need an interface like this to support driver domains too
> since "backend". To put it another way, in QEMU, map_buf ==
> virtqueue_pop and unmap_buf == virtqueue_push.

You are proposing that the guest should define some guest memory to be
used as shared memory (some kind of replacement), right? A usage sketch
of the proposed map_buf/unmap_buf pair follows at the end of this mail.

This is fine, as long as we can _also_ map host memory somewhere else
(e.g. after guest memory, above 1TB, etc.). I definitely want a 64MB
guest to be able to map a 2GB shared-memory zone. (See my other mail
about the execute-in-place via DCSS use case.)

I think we should start to write down some requirements. This will help
us get a better understanding of the necessary interface. Here are my
first ideas (a rough sketch of a possible request format also follows at
the end of this mail):

o allow mapping host shared memory to any place that can be addressed
  via a PFN
o allow mapping beyond the end of guest storage
o allow replacing guest memory
o support read-only and read/write modes
o keep the driver interface free of hardware-specific details (e.g.
  prefer generic virtio over PCI)

More ideas are welcome.
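
To make the quoted proposal a bit more concrete, here is a minimal usage
sketch, assuming the map_buf/unmap_buf pair were added as virtqueue
operations. Only the two prototypes come from Anthony's mail; the ops
structure and the two helper functions are invented here for illustration
and do not exist in the virtio code today.

#include <linux/virtio.h>
#include <linux/uio.h>

/* hypothetical container for the proposed ops; not part of virtio */
struct virtqueue_map_ops {
	/* returns an iovec array describing the next available buffer:
	 * out_num read-only segments followed by in_num writable ones */
	struct iovec *(*map_buf)(struct virtqueue *vq,
				 unsigned int *out_num, unsigned int *in_num);
	void (*unmap_buf)(struct virtqueue *vq, struct iovec *iov,
			  unsigned int out_num, unsigned int in_num);
};

/* placeholder driver-specific helpers, also invented for this sketch */
static void handle_out_segment(void *buf, size_t len) { }
static void fill_in_segment(void *buf, size_t len) { }

static void consume_one(struct virtqueue *vq, struct virtqueue_map_ops *ops)
{
	unsigned int out_num, in_num, i;
	struct iovec *iov;

	iov = ops->map_buf(vq, &out_num, &in_num);
	if (!iov)
		return;		/* nothing pending on the ring */

	/* read the request segments (read-only for us) ... */
	for (i = 0; i < out_num; i++)
		handle_out_segment(iov[i].iov_base, iov[i].iov_len);

	/* ... fill the response segments, then hand the buffer back,
	 * mirroring virtqueue_pop/virtqueue_push in QEMU */
	for (i = 0; i < in_num; i++)
		fill_in_segment(iov[out_num + i].iov_base,
				iov[out_num + i].iov_len);

	ops->unmap_buf(vq, iov, out_num, in_num);
}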
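
For the requirement list, here is an equally rough sketch of what a
transport-independent mapping request could look like, sent by the guest
over a virtio control queue. Every name, constant and field below is
hypothetical and only meant to make the requirements concrete; nothing
like this is defined anywhere yet.

#include <linux/types.h>

#define VIRTIO_SHM_MAP		1	/* map a host region at dst_pfn */
#define VIRTIO_SHM_UNMAP	2	/* tear the mapping down again */

#define VIRTIO_SHM_F_RDONLY	(1 << 0) /* map read-only */
#define VIRTIO_SHM_F_REPLACE	(1 << 1) /* may replace existing guest memory */

struct virtio_shm_req {
	__le32 type;		/* VIRTIO_SHM_MAP or VIRTIO_SHM_UNMAP */
	__le32 flags;		/* VIRTIO_SHM_F_* */
	__le64 shmid;		/* host-side identifier of the shared region */
	__le64 dst_pfn;		/* guest PFN where the region should appear;
				 * may lie beyond the end of guest storage,
				 * e.g. above 1TB for a 64MB guest */
	__le64 num_pages;	/* length of the mapping in guest pages */
};

Because the request only talks about PFNs and flags, it stays independent
of the transport (PCI, s390, ...) and covers mapping beyond guest storage,
replacing guest memory, and read-only vs. read/write mappings.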