.snip..
> > > Shared Virtual Memory feature in pass-through scenarios is actually SVM
> > > virtualization. It is to let application programs (running in guest) share their
> > > virtual address with assigned device (e.g. graphics processors or accelerators).
> >
> > I think I am missing something obvious, but the current way that DRM
> > works is that the kernel sets up its VA addresses for the GPU and it uses
> > that for its ring. It also sets up a user-level mapping for the GPU if the
> > application (Xorg) really wants it - but most of the time the kernel is
> > in charge of poking at the ring, and the memory that is shared with
> > Xorg is normal RAM allocated via alloc_pages (see
> > drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
> > and drivers/gpu/drm/ttm/ttm_page_alloc.c).
> >
> > So are we talking about the guest applications having access to the
> > ring of the GPU?
>
> No. SVM is purely about sharing CPU address space with device. Command
> submission is still through kernel driver which controls rings (with SVM then
> you can put VA into those commands). There are other vendor specific
> features to enable direct user space submission which is orthogonal to SVM.

Apologies for my ignorance, but how is this beneficial? As in, currently you
would put bus addresses on the ring, but now you can put VA addresses.

The obvious benefit I see is that you omit the DMA ops, which means one less
lookup (VA->bus address) in software - but I would have thought that would
have a negligible performance impact? And now the IOMMU, alongside the CPU,
would do this lookup.

Or are there some other improvements in this?

>
> Thanks
> Kevin
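
To make concrete which lookup I mean, here is a rough sketch - submit_on_ring(),
bind_device_to_mm() and struct gpu_cmd are made-up names for illustration, not
any real driver's API; only the dma_map_single()/dma_unmap_single() calls are
the actual DMA API:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/types.h>

struct gpu_cmd { u64 addr; };                                      /* made up */

void submit_on_ring(struct device *dev, struct gpu_cmd *cmd);      /* made up */
void bind_device_to_mm(struct device *dev, struct mm_struct *mm);  /* made up */

/* Today, without SVM: the driver pushes the buffer through the DMA API,
 * so the address placed in the ring command is a bus address / IOVA. */
static int submit_without_svm(struct device *dev, void *buf, size_t len)
{
	struct gpu_cmd cmd;
	dma_addr_t bus = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, bus))
		return -ENOMEM;
	cmd.addr = bus;				/* device sees the bus address */
	submit_on_ring(dev, &cmd);
	dma_unmap_single(dev, bus, len, DMA_TO_DEVICE);
	return 0;
}

/* With SVM: the device is bound to the process address space (PASID), the
 * IOMMU walks the CPU page tables at DMA time, so the raw user VA can go
 * straight into the command. */
static int submit_with_svm(struct device *dev, void __user *buf)
{
	struct gpu_cmd cmd;

	bind_device_to_mm(dev, current->mm);	/* vendor-specific binding   */
	cmd.addr = (u64)(uintptr_t)buf;		/* plain CPU virtual address */
	submit_on_ring(dev, &cmd);
	return 0;
}

So the only software step that disappears is the map/unmap pair; the VA->bus
translation itself just moves into the IOMMU's page-table walk - hence my
question about whether that is really a measurable win.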