Re: [PATCH RFCv1 08/14] iommufd: Add IOMMU_VIOMMU_SET_DEV_ID ioctl

On Thu, May 23, 2024 at 06:42:56AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@xxxxxxxxxx>
> > Sent: Saturday, April 13, 2024 11:47 AM
> >
> > Introduce a new ioctl to set a per-viommu device virtual id that should be
> > linked to the physical device id (or just a struct device pointer).
> >
> > Since a viommu (user space IOMMU instance) can have multiple devices
> 
> this is true...
> 
> > while
> > it's not ideal to confine a device to one single user space IOMMU instance
> > either, these two shouldn't just do a 1:1 mapping. Add two xarrays in
> 
> ...but why would one device link to multiple viommu instances?

That was a suggestion from Jason, IIRC, to avoid limiting a device
to a single viommu, though I can't find the source at the moment.
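
Conceptually it is just an N:M association. A minimal sketch of what
the two xarrays could look like (all names below are made up for
illustration; this is not the actual patch code):

#include <linux/xarray.h>

/*
 * Hypothetical sketch only: each viommu maps virtual device IDs to
 * devices, and each device tracks the viommus it has joined, so
 * neither direction is forced to be 1:1.
 */
struct viommu_sketch {
	struct xarray vdev_ids;		/* virtual dev ID -> idev */
};

struct idev_sketch {
	struct xarray viommus;		/* viommu obj ID -> viommu */
};

static int viommu_set_vdev_id_sketch(struct viommu_sketch *viommu,
				     struct idev_sketch *idev,
				     unsigned long vdev_id,
				     unsigned long viommu_id)
{
	int rc;

	/* xa_insert() rejects a duplicate virtual ID with -EBUSY */
	rc = xa_insert(&viommu->vdev_ids, vdev_id, idev, GFP_KERNEL);
	if (rc)
		return rc;
	rc = xa_insert(&idev->viommus, viommu_id, viommu, GFP_KERNEL);
	if (rc)
		xa_erase(&viommu->vdev_ids, vdev_id);
	return rc;
}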

Jason, would you mind shedding some light here?

> Or is it referring to Tegra194 as arm-smmu-nvidia.c tries to support?

Not exactly. That's an SMMUv2 driver, which is not in our plan for
virtualization at the moment. And that driver essentially just
matches a different "compatible" string for a unique SMMUv2
implementation.

> btw there is a check in the following code:
> 
> +       if (viommu->iommu_dev != idev->dev->iommu->iommu_dev) {
> +               rc = -EINVAL;
> +               goto out_put_viommu;
> +       }
> 
> I vaguely remember an earlier discussion about multiple vSMMU instances
> following the physical SMMU topology, but don't quite recall the exact
> reason.
> 
> What is the actual technical obstacle prohibiting one to put multiple
> VCMDQ's from different SMMU's into one vIOMMU instance?

Because VCMDQ passthrough involves a direct mmap of a HW MMIO page
into the guest-level MMIO region. The MMIO page provides read/write
access to the queue's head and tail indexes.

With a single pSMMU and a single vSMMU, it's simply a 1:1 mapping.

With multiple pSMMUs and a single vSMMU, the single vSMMU would see
one guest-level MMIO region backed by multiple physical pages. Since
we are talking about MMIO, there is technically no way to select the
MMIO page corresponding to a given device, not to mention that we
don't really want the VMM to be involved, i.e. no VM exit, when
using VCMDQ.
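
To put that constraint in code, a rough sketch of the mmap path
(hypothetical names; vcmdq_hw_pfn just stands in for wherever the
driver would get the pSMMU page from):

#include <linux/mm.h>

/* Hypothetical: the single pSMMU page backing this region */
static unsigned long vcmdq_hw_pfn;

/*
 * Sketch only: the pfn is fixed at mmap time, so one vSMMU MMIO
 * region can be backed by exactly one pSMMU's page. There is no
 * hook to redirect a guest access to a per-device page afterwards.
 */
static int vcmdq_mmap_sketch(struct file *file, struct vm_area_struct *vma)
{
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	return remap_pfn_range(vma, vma->vm_start, vcmdq_hw_pfn,
			       PAGE_SIZE, vma->vm_page_prot);
}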

So, there must be some kind of multi-instance carrier to hold those
MMIO pages, with devices behind different pSMMUs attached to their
corresponding carriers. And today we have VIOMMU as that carrier.
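
Roughly (hypothetical sketch again, but matching the iommu_dev check
you quoted above):

#include <linux/device.h>
#include <linux/iommu.h>

/*
 * Sketch only: each viommu carries exactly one pSMMU's VCMDQ MMIO
 * page, and a device may only join the viommu of the pSMMU that it
 * actually sits behind.
 */
struct viommu_carrier_sketch {
	struct iommu_device *iommu_dev;	/* the backing pSMMU */
	unsigned long vcmdq_mmio_pfn;	/* that pSMMU's VCMDQ page */
};

static int viommu_check_dev_sketch(struct viommu_carrier_sketch *viommu,
				   struct device *dev)
{
	/* same idea as the viommu->iommu_dev check above */
	if (viommu->iommu_dev != dev->iommu->iommu_dev)
		return -EINVAL;
	return 0;
}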

One step back: even without the VCMDQ feature, a multi-pSMMU setup
will have multiple viommus (in our latest design) added to the
viommu list of a single vSMMU. Yet, the vSMMU in that case always
traps regular SMMU CMDQ commands, so it can do viommu selection or
even broadcast (if it has to).
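
I.e. something along these lines on the trapped path, wherever that
dispatch ends up living (hypothetical sketch; viommu_forward_cmd_sketch
is a made-up helper):

#include <linux/list.h>
#include <linux/xarray.h>

struct smmu_cmd_sketch {
	bool broadcast;			/* e.g. a global invalidation */
	unsigned long vdev_id;		/* target device otherwise */
};

struct viommu_entry_sketch {
	struct list_head node;
	struct xarray vdev_ids;		/* vdev IDs behind this pSMMU */
};

struct vsmmu_sketch {
	struct list_head viommus;	/* one entry per pSMMU */
};

void viommu_forward_cmd_sketch(struct viommu_entry_sketch *viommu,
			       struct smmu_cmd_sketch *cmd);

/* Sketch only: select the viommu that owns the vdev ID, or broadcast */
static void vsmmu_issue_cmd_sketch(struct vsmmu_sketch *vsmmu,
				   struct smmu_cmd_sketch *cmd)
{
	struct viommu_entry_sketch *viommu;

	list_for_each_entry(viommu, &vsmmu->viommus, node) {
		if (cmd->broadcast ||
		    xa_load(&viommu->vdev_ids, cmd->vdev_id))
			viommu_forward_cmd_sketch(viommu, cmd);
	}
}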

Thanks
Nicolin



