Re: [PATCH 12/13] kvm/powerpc: Accelerate H_PUT_TCE by implementing it in real mode

On 17.05.2011, at 11:11, Benjamin Herrenschmidt wrote:

> On Tue, 2011-05-17 at 10:01 +0200, Alexander Graf wrote:
>> I'm not sure I fully understand how this is supposed to work. If the
>> tables are kept inside the kernel, how does userspace get to know
>> where to DMA to?
> 
> The guest gets a dma range from the device-tree which is the range of
> device-side dma addresses it can use that correspond to the table.
> 
> The guest kernel uses the normal linux iommu space allocator to allocate
> space in that region and uses H_PUT_TCE to populate the corresponding
> table entries.
> 
> This is the same interface that is used for "real" IOMMUs with PCI
> devices, btw.
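As a rough sketch of the flow described above (the names, constants, and error codes here are illustrative, not the actual arch/powerpc/kvm code): H_PUT_TCE takes an I/O bus address inside the DMA window plus a TCE value, and writes the corresponding table entry.

```c
#include <assert.h>
#include <stdint.h>

#define TCE_PAGE_SHIFT 12          /* the IOMMU maps in 4K pages */
#define TCE_READ       0x1ULL      /* entry permits device reads */
#define TCE_WRITE      0x2ULL      /* entry permits device writes */

#define H_SUCCESS   0
#define H_PARAMETER (-4)

/* One table per logical I/O bus; hypothetical layout for illustration. */
struct tce_table {
    uint64_t window_size;          /* size of the device-side DMA window */
    uint64_t *entries;             /* one TCE per IOMMU page */
};

/* ioba: I/O bus address within the window; tce: guest real page | perms */
static long h_put_tce(struct tce_table *tbl, uint64_t ioba, uint64_t tce)
{
    if (ioba >= tbl->window_size)
        return H_PARAMETER;
    tbl->entries[ioba >> TCE_PAGE_SHIFT] = tce;
    return H_SUCCESS;
}
```

The guest's IOMMU space allocator picks a free ioba in the window and then makes the hypercall; the handler being this simple is what makes a real-mode implementation attractive.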

I'm still slightly puzzled here :). IIUC the main point of an IOMMU is for the kernel to change where device accesses actually go. So the device DMAs to address A, the access goes through the IOMMU, and in reality it hits address B.
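Concretely, the A -> B step is just a table lookup on the page-frame bits, with the page offset carried through unchanged. A minimal sketch, assuming the 4K TCE page size (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define TCE_PAGE_SHIFT 12
#define TCE_PAGE_MASK  ((1ULL << TCE_PAGE_SHIFT) - 1)

/* tces[i] holds the real page backing IOMMU page i of the window */
static uint64_t iommu_xlate(const uint64_t *tces, uint64_t dma_addr)
{
    uint64_t real_page = tces[dma_addr >> TCE_PAGE_SHIFT] & ~TCE_PAGE_MASK;
    return real_page | (dma_addr & TCE_PAGE_MASK);   /* A -> B */
}
```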

Now, how do we tell the devices implemented in qemu that they're supposed to DMA to address B instead of A if the mapping table is kept in-kernel?


Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

