On 09/15/2009 04:03 PM, Gregory Haskins wrote:
>> In this case the x86 is the owner and the ppc boards use translated
>> access. Just switch drivers and device and it falls into place.
> You could switch vbus roles as well, I suppose.

Right, there's no real difference in this regard.

> Another potential
> option is that he can stop mapping host memory on the guest so that it
> follows the more traditional model. As a bus-master device, the ppc
> boards should have access to any host memory at least in the GFP_DMA
> range, which would include all relevant pointers here.
>
> I digress: I was primarily addressing the concern that Ira would need
> to manage the "host" side of the link using hvas mapped from userspace
> (even if host side is the ppc boards). vbus abstracts that access so as
> to allow something other than userspace/hva mappings. OTOH, having each
> ppc board run a userspace app to do the mapping on its behalf and feed
> it to vhost is probably not a huge deal either. Where vhost might
> really fall apart is when any assumptions about pageable memory occur,
> if any.

Why? vhost will call get_user_pages() or copy_*_user() which ought to
do the right thing.
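
Pageable memory is the normal case for those interfaces: they run in
process context and fault pages in on demand. Roughly this (a sketch
of the idea, not actual vhost code; fetch_desc() is a made-up name):

#include <linux/uaccess.h>

/* Hypothetical helper: pull a descriptor out of a userspace-mapped
 * ring. copy_from_user() faults the page back in if it was paged
 * out, so pageable memory is fine as long as we run in process
 * context. */
static int fetch_desc(const void __user *uaddr, void *buf, size_t len)
{
        if (copy_from_user(buf, uaddr, len))
                return -EFAULT;
        return 0;
}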

> As an aside: a bigger issue is that, iiuc, Ira wants more than a single
> ethernet channel in his design (multiple ethernets, consoles, etc). A
> vhost solution in this environment is incomplete.

Why? Instantiate as many vhost-nets as needed.
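
Each channel is just another open of /dev/vhost-net. Something like
this from the management process (untested sketch; error cleanup and
the VHOST_NET_SET_BACKEND wiring are omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Open one vhost-net instance per channel. */
static int open_vhost_channels(int *fds, int n)
{
        int i;

        for (i = 0; i < n; i++) {
                fds[i] = open("/dev/vhost-net", O_RDWR);
                if (fds[i] < 0)
                        return -1;
                /* tie this instance to the caller's mm */
                if (ioctl(fds[i], VHOST_SET_OWNER) < 0)
                        return -1;
        }
        return 0;
}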

> Note that Ira's architecture highlights that vbus's explicit management
> interface is more valuable here than it is in KVM, since KVM already has
> its own management interface via QEMU.

vhost-net and vbus both need management, vhost-net via ioctls and vbus
via configfs. The only difference is the implementation. vhost-net
leaves much more to userspace, that's the main difference.
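
For comparison, the vbus flow from userspace is filesystem operations
on configfs rather than ioctls; the path below is a guess at the
layout, not taken from the vbus patches:

#include <sys/stat.h>
#include <sys/types.h>

/* Create a device instance by mkdir on configfs (hypothetical
 * hierarchy -- the real vbus layout may differ). */
static int vbus_create_eth(void)
{
        return mkdir("/sys/kernel/config/vbus/devices/eth0", 0755);
}
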
--
error compiling committee.c: too many arguments to function