On 18.05.22 19:39, Arnd Bergmann wrote:

Hello Arnd

> On Wed, May 18, 2022 at 5:06 PM Oleksandr <olekstysh@xxxxxxxxx> wrote:
>> On 18.05.22 17:32, Arnd Bergmann wrote:
>>> On Sat, May 7, 2022 at 7:19 PM Oleksandr Tyshchenko <olekstysh@xxxxxxxxx> wrote:
>>>
>>> This would mean having a device node for the grant-table mechanism
>>> that can be referred to using the 'iommus' phandle property, with
>>> the domid as an additional argument.
>> I assume you are speaking about something like the following?
>>
>> xen_dummy_iommu {
>>         compatible = "xen,dummy-iommu";
>>         #iommu-cells = <1>;
>> };
>>
>> virtio@3000 {
>>         compatible = "virtio,mmio";
>>         reg = <0x3000 0x100>;
>>         interrupts = <41>;
>>
>>         /* The device is located in Xen domain with ID 1 */
>>         iommus = <&xen_dummy_iommu 1>;
>> };
> Right, that's the idea,

Thank you for the confirmation.

> except I would not call it a 'dummy'.
> From the perspective of the DT, this behaves just like an IOMMU,
> even if the exact mechanism is different from most hardware IOMMU
> implementations.

Well, agree.
> It does not quite fit the model that Linux currently uses for iommus,
> as that has an allocator for dma_addr_t space

Yes (patch #3/7 adds a grant-table based allocator).

> , but I would think it's
> conceptually close enough that it makes sense for the binding.

>> Interesting idea. I am wondering, do we need any extra actions for
>> this to work in a Linux guest (dummy IOMMU driver, etc)?

> It depends on how closely the guest implementation can be made to
> resemble a normal iommu. If you do allocate dma_addr_t addresses,
> it may actually be close enough that you can just turn the grant-table
> code into a normal iommu driver and change nothing else.
Unfortunately, I failed to find a way to use grant references at the
iommu_ops level (I mean, to fully pretend that we are an IOMMU driver).
I am not too familiar with that area, so what is written below might be
wrong or at least imprecise.

A normal IOMMU driver in Linux doesn't allocate DMA addresses by
itself, it just maps (IOVA -> PA) what the upper layer requested to be
mapped. The DMA address allocation is done by that upper layer
(DMA-IOMMU, the glue layer between the DMA API and the IOMMU API, which
allocates an IOVA for a PA?). But all we need here is to allocate our
specific grant-table based DMA addresses (DMA address = grant reference
+ offset within the page), so let's say we need an entity that takes a
physical address as a parameter and returns a DMA address (which is
what commit #3/7 actually does), and that's all. So, working at the
dma_ops layer we get exactly what we need, with minimal changes to the
guest infrastructure. In our case Xen itself acts as the IOMMU.
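
Just to illustrate the shape of what I mean by "entity", a very rough
sketch of such a dma_map_ops-level hook is below. The function and
variable names are made up for this example (it is not the code from
patch #3/7), and it ignores multi-page mappings, error unwinding and
where the backend domid actually comes from:

#include <linux/dma-map-ops.h>
#include <linux/dma-mapping.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/*
 * Sketch only: grant the backend domain access to the page and encode
 * "grant reference + offset within the page" into the returned DMA
 * address.
 */
static dma_addr_t xen_grant_dma_map_page(struct device *dev,
					 struct page *page,
					 unsigned long offset, size_t size,
					 enum dma_data_direction dir,
					 unsigned long attrs)
{
	domid_t backend_domid = 1; /* would come from the "iommus" argument */
	int ref;

	ref = gnttab_grant_foreign_access(backend_domid,
					  xen_page_to_gfn(page),
					  dir == DMA_TO_DEVICE);
	if (ref < 0)
		return DMA_MAPPING_ERROR;

	return ((dma_addr_t)ref << PAGE_SHIFT) | offset;
}

static const struct dma_map_ops xen_grant_dma_ops = {
	.map_page = xen_grant_dma_map_page,
	/* .unmap_page, .map_sg, .alloc, etc. would be needed as well */
};

A device behind such dma_ops never sees guest physical addresses at
all, only grant-based DMA addresses, which is exactly the property we
want here.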
Assuming that we want to reuse the IOMMU infrastructure somehow for our
needs, I think we would likely need to introduce a new, specific IOVA
allocator (alongside the generic one) to be hooked up by the DMA-IOMMU
layer when running on top of Xen. But even with such a specific IOVA
allocator returning what we actually need (DMA address = grant
reference + offset within the page), we would still need a minimal
IOMMU driver to be present in the system anyway, in order to track the
mappings(?) and do nothing with them, just returning success (and this
specific IOMMU driver would have to implement all the mandatory
callbacks).
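
To make that alternative concrete, here is a rough sketch of the kind
of "accept everything, do nothing" callbacks such a driver would carry.
It is written against the iommu_ops layout at the time of writing; the
names are made up, and the exact set of mandatory callbacks (and their
signatures) is an assumption that differs between kernel versions:

#include <linux/iommu.h>

/*
 * Sketch only: the real "translation" is the grant reference already
 * encoded in the DMA address and handled by Xen, so the guest-side
 * IOMMU driver has nothing useful to program.
 */
static int xen_stub_iommu_map(struct iommu_domain *domain,
			      unsigned long iova, phys_addr_t paddr,
			      size_t size, int prot, gfp_t gfp)
{
	/* At most record the iova->paddr mapping, then claim success. */
	return 0;
}

static size_t xen_stub_iommu_unmap(struct iommu_domain *domain,
				   unsigned long iova, size_t size,
				   struct iommu_iotlb_gather *gather)
{
	/* Nothing to tear down, pretend the whole range was unmapped. */
	return size;
}

static const struct iommu_ops xen_stub_iommu_ops = {
	.map		= xen_stub_iommu_map,
	.unmap		= xen_stub_iommu_unmap,
	.pgsize_bitmap	= PAGE_SIZE,	/* base page size only */
	/*
	 * domain_alloc/domain_free, attach_dev, probe_device,
	 * release_device, device_group, of_xlate, iova_to_phys, ...
	 * would all have to be provided as well, even if trivially.
	 */
};

So the IOMMU route means carrying this extra stub driver plus the
special IOVA allocator, only to end up with the same DMA addresses that
the dma_ops approach already produces directly.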
I completely agree that it would be really nice to reuse the generic
IOMMU bindings rather than introducing a Xen-specific property, if what
we are trying to implement in the current patch series more or less
fits the usage of "iommus" in Linux. But if we have to add more
complexity/more components to the code just for the sake of reusing the
device tree binding, that raises the question of whether it is
worthwhile.

Or have I really missed something?
> Arnd
--
Regards,
Oleksandr Tyshchenko