On Wed, Nov 30, 2016 at 08:49:24AM +0000, Liu, Yi L wrote:
> What's changed from v2:
> a) Detailed feature description
> b) Refined description in "Address translation in virtual SVM"
> c) "Terms" section added
>
> Content
> ===============================================
> 1. Feature description
> 2. Why use it?
> 3. How to enable it
> 4. How to test
> 5. Terms
>
> Details
> ===============================================
> 1. Feature description
>
> Shared virtual memory (SVM) lets an application program share its
> virtual address space with SVM-capable devices.
>
> Shared virtual memory details:
> a) The SVM feature requires ATS/PRQ/PASID support on both the device
>    side and the IOMMU side.
> b) SVM-capable devices can send DMA requests with a PASID; the
>    address in such a request is a virtual address within a program's
>    virtual address space.
> c) The IOMMU uses the first-level page table to translate the address
>    in the request.
> d) The first-level page table is an HVA->HPA mapping on bare metal.
>
> The Shared Virtual Memory feature in pass-through scenarios is
> actually SVM virtualization. It lets application programs (running in
> a guest) share their virtual address space with an assigned device
> (e.g. a graphics processor or accelerator).

I think I am missing something obvious, but the current way that DRM
works is that the kernel sets up its VA addresses for the GPU and uses
those for its ring. It also sets up a user-level mapping for the GPU
if the application (Xorg) really wants it - but most of the time the
kernel is in charge of poking at the ring, and the memory that is
shared with Xorg is normal RAM allocated via alloc_pages (see
drivers/gpu/drm/ttm/ttm_page_alloc_dma.c and
drivers/gpu/drm/ttm/ttm_page_alloc.c).

So are we talking about the guest applications having access to the
ring of the GPU?
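
To make the quoted feature description concrete, here is a minimal,
untested sketch of how a bare-metal device driver binds the calling
process's address space to a device for SVM, using the
intel_svm_bind_mm()/intel_svm_unbind_mm() API from
include/linux/intel-svm.h. The my_dev_* wrappers and the step of
programming the PASID into the device are hypothetical, driver-specific
details, not part of the series under discussion:

#include <linux/device.h>
#include <linux/intel-svm.h>

/* Hypothetical driver hook: bind the calling process's mm to the
 * device so it can issue PASID-tagged DMA using plain virtual
 * addresses.  The IOMMU then walks the process page tables (the
 * "first-level" translation) for those requests; PRQ services any
 * faults the walk raises. */
static int my_dev_enable_svm(struct device *dev, int *pasid)
{
	int ret;

	/* Allocate a PASID and attach it to current->mm.  No fault
	 * callback is registered here (ops == NULL). */
	ret = intel_svm_bind_mm(dev, pasid, 0, NULL);
	if (ret)
		return ret;

	/* A real driver would now write *pasid into a device
	 * register/context so the device tags its DMA with it. */
	return 0;
}

static void my_dev_disable_svm(struct device *dev, int pasid)
{
	intel_svm_unbind_mm(dev, pasid);
}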
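
And for the alloc_pages point above: what the TTM pool allocators
ultimately build on is the ordinary page allocator, roughly like the
stripped-down, hypothetical sketch below (no pooling, caching-attribute
handling, or DMA mapping, which the real ttm_page_alloc*.c code adds):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Grab 2^order pages of normal system RAM - the same underlying call
 * the TTM pool allocators use; this is ordinary memory, not VRAM. */
static void *grab_backing_pages(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);

	return page ? page_address(page) : NULL;
}

static void release_backing_pages(void *vaddr, unsigned int order)
{
	if (vaddr)
		__free_pages(virt_to_page(vaddr), order);
}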