RE: [RFC 2/3] virtio-iommu: device probing and operations

> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@xxxxxxx]
> Sent: Monday, August 21, 2017 8:00 PM
> 
> On 21/08/17 08:59, Tian, Kevin wrote:
> >> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@xxxxxxx]
> >> Sent: Monday, April 24, 2017 11:06 PM
> >>>>>>   1. Attach device
> >>>>>>   ----------------
> >>>>>>
> >>>>>> struct virtio_iommu_req_attach {
> >>>>>> 	le32	address_space;
> >>>>>> 	le32	device;
> >>>>>> 	le32	flags/reserved;
> >>>>>> };
> >>>>>>
> >>>>>> Attach a device to an address space. 'address_space' is an identifier
> >>>>>> unique to the guest. If the address space doesn't exist in the IOMMU
> >>>>>
> >>>>> Based on your description this address space ID is per operation right?
> >>>>> MAP/UNMAP and page-table sharing should have different ID spaces...
> >>>>
> >>>> I think it's simpler if we keep a single IOASID space per virtio-iommu
> >>>> device, because the maximum number of address spaces (described by
> >>>> ioasid_bits) might be a restriction of the pIOMMU. For page-table sharing
> >>>> you still need to define which devices will share a page directory using
> >>>> ATTACH requests, though that interface is not set in stone.
> >>>
> >>> got you. yes VM is supposed to consume less IOASIDs than physically
> >>> available. It doesn’t hurt to have one IOASID space for both IOVA
> >>> map/unmap usages (one IOASID per device) and SVM usages (multiple
> >>> IOASIDs per device). The former is digested by software and the latter
> >>> will be bound to hardware.
> >>>
> >>
> >> Hmm, I'm using address space indexed by IOASID for "classic" IOMMU, and
> >> then contexts indexed by PASID when talking about SVM. So in my mind an
> >> address space can have multiple sub-address-spaces (contexts). Number of
> >> IOASIDs is a limitation of the pIOMMU, and number of PASIDs is a
> >> limitation of the device. Therefore attaching devices to address spaces
> >> would update the number of available contexts in that address space. The
> >> terminology is not ideal, and I'd be happy to change it for something more
> >> clear.
> >>
> >
> > (sorry to pick up this old thread, as the .tex one is not good for review
> > and this thread provides necessary background for IOASID).
> >
> > Hi, Jean,
> >
> > I'd like to hear more clarification regarding the relationship between
> > IOASID and PASID. When reading back above explanation, it looks
> > confusing to me now (though I might get the meaning months ago :/).
> > At least Intel VT-d only understands PASID (or you can think IOASID
> > =PASID). There is no such layered address space concept. Then for
> > map/unmap type request, do you intend to steal some PASIDs for
> > that purpose on such architecture (since IOASID is a mandatory field
> > in map/unmap request)?
> 
> IOASID is a logical ID, it isn't used by hardware. The address space
> concept in virtio-iommu allows to group endpoints together, so that they
> have the same address space. I thought it was pretty much the same as
> "domains" in VT-d? In any case, it is the same as domains in Linux. An
> IOASID provides a handle for communication between virtio-iommu device and
> driver, but unlike PASID, the IOASID number doesn't mean anything outside
> of virtio-iommu.

Thanks. It's clear to me then.

btw does it make more sense to use "domain id" instead of "IO address
space id"? For one, with layered address spaces the parent is usually a
superset of all its child address spaces, which doesn't apply here: the
anonymous context and the PASID-tagged address spaces are completely
isolated from each other. 'domain' is a more inclusive term for something
that embraces multiple address spaces. For two, 'domain' aligns better
with existing software terminology (e.g. iommu_domain), which makes it
easier for people to catch up. :-)
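
For illustration, a rough guest-side sketch of the analogy I have in mind:
one IOASID per iommu_domain, attached to every endpoint in that domain. The
viommu_send_req()/viommu_attach_domain() helpers and the le32 typedef below
are made up; only the request layout follows the ATTACH structure quoted
earlier in the thread.

#include <stdint.h>
#include <stddef.h>

typedef uint32_t le32;		/* little-endian on the wire */

struct virtio_iommu_req_attach {
	le32	address_space;	/* IOASID, i.e. the "domain id" */
	le32	device;		/* endpoint ID */
	le32	flags;		/* reserved for now */
};

/* Hypothetical transport helper: queue one request and wait for the reply. */
int viommu_send_req(void *req, size_t len);

/* Attach every endpoint of one guest domain to a single IOASID. */
static int viommu_attach_domain(le32 ioasid, const le32 *endpoints, int n)
{
	struct virtio_iommu_req_attach req = { .address_space = ioasid };
	int i, ret;

	for (i = 0; i < n; i++) {
		req.device = endpoints[i];
		ret = viommu_send_req(&req, sizeof(req));
		if (ret)
			return ret;
	}
	return 0;
}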

> 
> I haven't introduced PASIDs in public virtio-iommu documents yet, but the
> way I intend it, PASID != IOASID. We will still have a logical address
> space identified by IOASID, that can contain multiple contexts identified
> by PASID. At the moment, after the ATTACH request, an address space
> contains a single anonymous context (NO PASID) that can be managed with
> MAP/UNMAP requests. With virtio-iommu v0.4, structures look like the
> following. The NO PASID context is implicit.
> 
>                     address space      context
>     endpoint ----.                                  .- mapping
>     endpoint ----+---- IOASID -------- NO PASID ----+- mapping
>     endpoint ----'                                  '- mapping
> 
> I'd like to add a flag to ATTACH that says "don't create a default
> anonymous context, I'll handle contexts myself". Then a new "ADD_TABLE"
> request to handle contexts. When creating a context, the guest decides if
> it wants to manage it via MAP/UNMAP requests (and a new "context" field),
> or instead manage mappings itself by allocating a page directory and use
> INVALIDATE requests.
> 
>                     address space      context
>     endpoint ----.                                  .- mapping
>     endpoint ----+---- IOASID ----+--- NO PASID ----+- mapping
>     endpoint ----'                |                 '- mapping
>                                   +--- PASID 0  ---- pgd
>                                   |     ...
>                                   '--- PASID N  ---- pgd
> 
> In this example the guest chose to still have an anonymous context that
> uses MAP/UNMAP, along with a few PASID contexts with their own page
> tables.
> 

The above explanation is good background. Would it be useful to include
it in the current spec? Though SVM support is not planned for now, adding
such background would help build a full story for the IOASID concept.
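
For instance, something like the following could accompany the diagram. To
be clear, none of these structures or names exist in v0.4; the ATTACH flag,
the ADD_TABLE request and the MAP "context" field are all only my guesses
based on your description, just to show how the layered IOASID/PASID model
could be made concrete:

#include <stdint.h>

typedef uint32_t le32;
typedef uint64_t le64;

/* Guess: ATTACH flag to skip the implicit anonymous (NO PASID) context. */
#define VIRTIO_IOMMU_ATTACH_F_NO_DEFAULT_CTX	(1 << 0)

/* Guess at ADD_TABLE: bind a guest page directory to one PASID context. */
struct virtio_iommu_req_add_table {
	le32	address_space;	/* IOASID the context belongs to */
	le32	pasid;		/* context ID, meaningful outside virtio-iommu */
	le64	pgd;		/* guest-physical address of the page directory */
	le32	flags;
};

/* Guess at MAP with a "context" field for MAP/UNMAP-managed contexts. */
struct virtio_iommu_req_map {
	le32	address_space;
	le32	context;	/* PASID, or a "NO PASID" sentinel value */
	le64	virt_addr;
	le64	phys_addr;
	le64	size;
	le32	flags;
};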

Thanks
Kevin