On Wed, Aug 19, 2009 at 01:06:45AM +0300, Avi Kivity wrote:
> On 08/19/2009 12:26 AM, Avi Kivity wrote:
>>>
>>> Off the top of my head, I would think that transporting userspace
>>> addresses in the ring (for copy_(to|from)_user()) vs. physical
>>> addresses (for DMAEngine) might be a problem. Pinning userspace
>>> pages into memory for DMA is a bit of a pain, though it is
>>> possible.
>>
>> Oh, the ring doesn't transport userspace addresses. It transports
>> guest addresses, and it's up to vhost to do something with them.
>>
>> Currently vhost supports two translation modes:
>>
>> 1. virtio address == host virtual address (using copy_to_user)
>> 2. virtio address == host virtual address plus an offset (using
>>    copy_to_user)
>>
>> The latter mode is used for kvm guests (with multiple offsets,
>> skipping some details).
>>
>> I think you need to add a third mode, virtio address == host
>> physical address (using the dma engine). Once you do that, and
>> wire up the signalling, things should work.
>
> In fact, you don't need a third mode. You can mmap the x86 address
> space into your ppc userspace and use the second mode. All you need
> then is the dma engine glue and byte swapping.

Hmm, I'll have to think about that.

The ppc is a 32-bit processor, so it has 4GB of address space for
everything: PCI, SDRAM, flash memory, and all other peripherals. This
is exactly like 32-bit x86, where you cannot have a PCI card that
exposes a 4GB PCI BAR; the system would have no address space left
for its own SDRAM.

On my x86 computers I only have 1GB of physical RAM, so the ppcs have
plenty of room to map the entire x86 RAM into their own address
space. That is exactly what I do now: accesses to ppc physical
address 0x80000000 "magically" hit x86 physical address 0x0.

Ira
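
For concreteness, a minimal sketch of the translation being discussed
(mode 2 applied to the window described above), assuming a single
contiguous mapping; the names here (vhost_window, guest_to_host) are
illustrative only, not vhost's actual API:

#include <stdint.h>
#include <stddef.h>

/* One translation window: guest addresses [guest_start,
 * guest_start + len) map onto a host-virtual base, e.g. an
 * mmap()ed view of the remote machine's RAM. */
struct vhost_window {
        uint64_t guest_start;   /* e.g. x86 guest-physical 0x0 */
        uint64_t len;           /* e.g. 1GB of x86 SDRAM */
        void    *host_base;     /* e.g. the ppc's view of x86 RAM,
                                   sitting at ppc physical 0x80000000
                                   and mmap()ed into userspace */
};

/* Translate a guest address carried in the virtio ring into a host
 * pointer; returns NULL if the address falls outside the window. */
static void *guest_to_host(const struct vhost_window *w, uint64_t gpa)
{
        if (gpa < w->guest_start || gpa - w->guest_start >= w->len)
                return NULL;
        return (char *)w->host_base + (gpa - w->guest_start);
}

With guest_start == 0, len == 1GB, and host_base pointing at the
mmap()ed window, a guest address like 0x1000 resolves to the right
spot in x86 RAM with no third translation mode; only the dma engine
glue and the byte swapping remain.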