Re: GEM allocation for para-virtualized DRM driver

On 03/18/2017 04:06 PM, Rob Clark wrote:
> On Sat, Mar 18, 2017 at 9:25 AM, Oleksandr Andrushchenko
> <andr2000@xxxxxxxxx> wrote:
>> Hi, Rob
>>
>> On 03/18/2017 02:22 PM, Rob Clark wrote:
>>> On Fri, Mar 17, 2017 at 1:39 PM, Oleksandr Andrushchenko
>>> <andr2000@xxxxxxxxx> wrote:
>>>> Hello,
>>>> I am writing a para-virtualized DRM driver for the Xen hypervisor,
>>>> and it now works with the DRM CMA helpers, but I would also like
>>>> to make it work with non-contiguous memory: the virtual machine
>>>> that the driver runs in can't guarantee that CMA is actually
>>>> physically contiguous (that is not a problem because of the IPMMU
>>>> and other means; the only constraint I have is that I cannot mmap
>>>> with pgprot == noncached). So, I am planning to use *drm_gem_get_pages* +
>>>> *shmem_read_mapping_page_gfp* to allocate memory for GEM objects
>>>> (scanout buffers + dma-bufs shared with the virtual GPU).
>>>>
>>>> Do you think this is the right approach to take?
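
(For reference, below is a rough and untested sketch of the allocation
path described above; the xen_gem_object struct and function names are
invented for illustration only.)

#include <drm/drmP.h>
#include <drm/drm_gem.h>

/* illustrative names only, not actual driver code */
struct xen_gem_object {
        struct drm_gem_object base;
        struct page **pages;    /* backing pages, possibly non-contiguous */
};

static int xen_gem_alloc_pages(struct drm_device *dev,
                               struct xen_gem_object *xen_obj,
                               size_t size)
{
        int ret;

        size = round_up(size, PAGE_SIZE);

        /* creates the shmem backing object for us */
        ret = drm_gem_object_init(dev, &xen_obj->base, size);
        if (ret)
                return ret;

        /* pulls the pages in from shmem; no physical contiguity implied */
        xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
        if (IS_ERR(xen_obj->pages)) {
                ret = PTR_ERR(xen_obj->pages);
                drm_gem_object_release(&xen_obj->base);
                xen_obj->pages = NULL;
                return ret;
        }

        return 0;
}
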
>>> I guess if you had some case where you needed to "migrate" buffers
>>> between host and guest memory,
>> Yes, this is the case, but I can "map" buffers between host and guests.
> if you need to physically copy (transfer), like a discrete GPU with
> vram, then TTM makes sense.  If you can map the pages directly into
> the guest then TTM is probably overkill.
We have zero copy from guest to host/HW, which is why I'm not considering TTM.
>>> then TTM might be useful.
>> I was looking into it, but it seems to be overkill in my case.
>> And isn't it that GEM should be used for new drivers, not TTM?
> Not really, it's just that (other than amdgpu which uses TTM) all of
> the newer drivers have been unified memory.
Good to know, thank you
> A driver for a new GPU
> that had vram of some sort should still use TTM.
Our virtual GPU support is done at the hypervisor level, so no changes to
existing GPU drivers are needed. The only thing to care about is that the
buffers our DRM driver provides can be imported and used by that GPU
(there are other issues related to memory, e.g. whether the real GPU/firmware
can see the memory of the guest, but that is another story).
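
(Also just a hedged, untested sketch: one way the buffers could be
handed to the real GPU driver is through PRIME, by building an
sg_table from the pages obtained with drm_gem_get_pages(); the
xen_gem_object struct is the made-up one from the sketch above, and
this would be wired up as the driver's .gem_prime_get_sg_table hook.)

#include <drm/drmP.h>
#include <linux/scatterlist.h>

static struct sg_table *xen_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
        /* xen_gem_object is the illustrative struct from the earlier sketch */
        struct xen_gem_object *xen_obj =
                container_of(obj, struct xen_gem_object, base);

        /* pages[] were filled earlier by drm_gem_get_pages() */
        return drm_prime_pages_to_sg(xen_obj->pages,
                                     obj->size >> PAGE_SHIFT);
}
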
> BR,
> -R

>>> Otherwise
>>> this sounds like the right approach.
>> Thank you. Actually, I am playing with alloc_pages + remap_pfn_range now,
>> but what DRM provides (_get_pages + shmem_read) seems to be more portable
>> and generic. So, I'll probably stick to it.
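
(For comparison, this is roughly what the alloc_pages + remap_pfn_range
path looks like, simplified and untested, with invented names; note the
sketch assumes one physically contiguous order-N allocation, which the
shmem-backed path does not need.)

#include <linux/gfp.h>
#include <linux/mm.h>

struct xen_drm_buf {
        struct page *pages;     /* from alloc_pages(GFP_KERNEL, order) */
        unsigned int order;     /* illustrative only, not actual driver code */
};

static int xen_drm_buf_mmap(struct xen_drm_buf *buf, struct vm_area_struct *vma)
{
        unsigned long size = vma->vm_end - vma->vm_start;

        if (size > (PAGE_SIZE << buf->order))
                return -EINVAL;

        /* keep the default (cached) vm_page_prot: noncached is not an option here */
        return remap_pfn_range(vma, vma->vm_start, page_to_pfn(buf->pages),
                               size, vma->vm_page_prot);
}
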
>>> BR,
>>> -R
>> Thank you for helping,
>> Oleksandr Andrushchenko
OK, then I'll drop my alloc_pages + remap_pfn_range in favor of
drm_gem_get_pages + shmem_read_mapping_page_gfp.
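
(If specific GFP flags ever become necessary, note that drm_gem_get_pages()
uses the gfp mask of the shmem mapping; an explicit per-page loop with
shmem_read_mapping_page_gfp() would look roughly like this untested
sketch, with xen_gem_pin_pages_gfp being a made-up name.)

#include <linux/mm.h>
#include <linux/shmem_fs.h>
#include <drm/drmP.h>

/* illustrative helper only, not actual driver code */
static int xen_gem_pin_pages_gfp(struct drm_gem_object *obj,
                                 struct page **pages, gfp_t gfp)
{
        struct address_space *mapping = obj->filp->f_mapping;
        int i, npages = obj->size >> PAGE_SHIFT;

        for (i = 0; i < npages; i++) {
                struct page *page =
                        shmem_read_mapping_page_gfp(mapping, i, gfp);

                if (IS_ERR(page)) {
                        while (--i >= 0)
                                put_page(pages[i]);
                        return PTR_ERR(page);
                }
                pages[i] = page;
        }

        return 0;
}
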

Thank you



