Re: [Alacrityvm-devel] [PATCH v3 3/6] vbus: add a "vbus-proxy" bus model for vbus_driver objects

On 08/18/2009 08:27 PM, Ira W. Snyder wrote:
>> In fact, modern x86s do have dma engines these days (google for Intel
>> I/OAT), and one of our plans for vhost-net is to allow their use for
>> packets above a certain size.  So a patch allowing vhost-net to
>> optionally use a dma engine is a good thing.
>
> Yes, I'm aware that very modern x86 PCs have general-purpose DMA
> engines, even though I don't have any capable hardware. However, I think
> it is better to support using any PC (with or without a DMA engine, on
> any architecture) as the PCI master, and to handle all of the DMA from
> the PCI agent, which is known to have a DMA engine.

Certainly; but if your PCI agent will support the DMA API, then the same vhost code will work with both I/OAT and your specialized hardware.
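The size-based split mentioned above can be sketched in plain C. This is not vhost-net code: DMA_COPY_THRESHOLD, dma_async_copy(), and copy_packet() are hypothetical stand-ins for a real dmaengine submission path, and the real cutoff would have to be measured per platform.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical cutoff: below this size, a CPU memcpy beats the DMA
 * engine's descriptor-setup and completion overhead. */
#define DMA_COPY_THRESHOLD 4096

/* Stand-in for submitting a copy to an I/OAT (or other dmaengine)
 * channel; it degrades to memcpy so the sketch is self-contained. */
static void dma_async_copy(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

/* Copy one packet, offloading only large packets to the DMA engine.
 * Returns 1 if the copy was offloaded, 0 if done by the CPU. */
static int copy_packet(void *dst, const void *src, size_t len)
{
	if (len >= DMA_COPY_THRESHOLD) {
		dma_async_copy(dst, src, len);
		return 1;
	}
	memcpy(dst, src, len);
	return 0;
}
```

The point is that the policy (when to offload) stays in one place, so the same copy path works whether the channel behind dma_async_copy() is I/OAT or a specialized PCI agent's engine.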

>> Exposing a knob to userspace is not an insurmountable problem; vhost-net
>> already allows changing the memory layout, for example.
>
> Let me explain the most obvious problem I ran into: setting the MAC
> addresses used in virtio.
>
> On the host (PCI master), I want eth0 (virtio-net) to get a random MAC
> address.
>
> On the guest (PCI agent), I want eth0 (virtio-net) to get a specific MAC
> address, aa:bb:cc:dd:ee:ff.
>
> The virtio feature negotiation code handles this by checking for the
> VIRTIO_NET_F_MAC feature in its configuration space. Unless BOTH drivers
> have VIRTIO_NET_F_MAC set, NEITHER will use the specified MAC address.
> This is because the feature negotiation code only accepts a feature if
> it is offered by both sides of the connection.
>
> In this case, I must have the guest generate a random MAC address and
> have the host put aa:bb:cc:dd:ee:ff into the guest's configuration
> space. This basically means hardcoding the MAC addresses in the Linux
> drivers, which is a big no-no.
>
> What would I expose to userspace to make this situation manageable?
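The negotiation rule described above (a feature takes effect only when both sides offer it) reduces to a bitwise AND of the two feature sets. A minimal sketch, with the VIRTIO_NET_F_MAC bit position taken from the virtio spec:

```c
#include <assert.h>
#include <stdint.h>

/* VIRTIO_NET_F_MAC is feature bit 5 in the virtio-net spec: "device
 * has given MAC address" in its configuration space. */
#define VIRTIO_NET_F_MAC (UINT32_C(1) << 5)

/* Virtio feature negotiation: the driver may only accept features the
 * device offers, so the effective set is the intersection of the two. */
static uint32_t negotiate_features(uint32_t device_features,
				   uint32_t driver_features)
{
	return device_features & driver_features;
}
```

This is why a feature set on only one side is equivalent to a feature set on neither: the intersection drops it.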


I think in this case you want one side to be virtio-net (I'm guessing the x86) and the other side vhost-net (the ppc boards with the dma engine). virtio-net on x86 would communicate with userspace on the ppc board to negotiate features and get a MAC address; the fast path would be between virtio-net and vhost-net (which would use the dma engine to push and pull data).
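Under that split, the guest-side driver's MAC selection follows the usual virtio-net pattern: read the address from config space when VIRTIO_NET_F_MAC was negotiated, otherwise generate a random one. A hedged userspace sketch — config_read_mac() and random_ether_addr() here are stand-ins for the real config-space accessor and the kernel helper of the same name:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define ETH_ALEN 6
#define VIRTIO_NET_F_MAC (UINT32_C(1) << 5)	/* feature bit 5 */

/* Stand-in for reading the MAC out of virtio config space; here it
 * returns a fixed address as if the host had placed it there. */
static void config_read_mac(uint8_t mac[ETH_ALEN])
{
	static const uint8_t fixed[ETH_ALEN] =
		{ 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff };
	memcpy(mac, fixed, ETH_ALEN);
}

/* Stand-in for the kernel's random-MAC helper: the result is locally
 * administered (bit 1 of the first octet set) and unicast (bit 0 clear). */
static void random_ether_addr(uint8_t mac[ETH_ALEN])
{
	for (int i = 0; i < ETH_ALEN; i++)
		mac[i] = (uint8_t)rand();
	mac[0] = (mac[0] & 0xfe) | 0x02;
}

/* Probe-time MAC selection, keyed off the negotiated feature set. */
static void choose_mac(uint32_t negotiated, uint8_t mac[ETH_ALEN])
{
	if (negotiated & VIRTIO_NET_F_MAC)
		config_read_mac(mac);
	else
		random_ether_addr(mac);
}
```

With userspace on the ppc board supplying the config space, it can decide per device whether to offer VIRTIO_NET_F_MAC and which address to expose, without hardcoding anything in the drivers.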

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
