Re: [PATCH v3 0/2] drm: Add shmem GEM library


 



On 09/14/2018 06:51 PM, Noralf Trønnes wrote:

Den 14.09.2018 18.13, skrev Daniel Vetter:
On Fri, Sep 14, 2018 at 5:48 PM, Thomas Hellstrom <thomas@xxxxxxxxxxxx> wrote:
Hi, Noralf,

On 09/11/2018 02:43 PM, Noralf Trønnes wrote:
This patchset adds a library for shmem backed GEM objects and makes use
of it in tinydrm.

When I made tinydrm I used the CMA helper because it was very easy to
use. In July last year I learned that this limits which drivers one can
PRIME import from, since CMA requires contiguous memory; tinydrm drivers
don't require that. So I set out to change that, looking first at shmem,
but that didn't work because shmem didn't work with fbdev deferred I/O.
Then I tried a vmalloc buffer, which worked with deferred I/O, but maybe
wouldn't be of much use as a library for other drivers. As my work to
split out shared code from the CMA helper came to an end, I had a generic
fbdev emulation that uses a shadow buffer for deferred I/O.
This means that I can now use shmem buffers after all.

I have looked at the other drivers that use drm_gem_get_pages(), and
several of them support different cache modes, so I've included that
even though tinydrm only uses the cached one.
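
For reference, a minimal sketch of that pattern: pin the shmem pages with
drm_gem_get_pages() and pick the kernel mapping's pgprot from a per-object
cache-mode flag. The flag names and the my_shmem_bo struct below are
illustrative only, not the patchset's actual API:

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <drm/drm_gem.h>

/* Illustrative cache modes; not the names used by the patchset. */
enum my_bo_cache_mode { MY_BO_CACHED, MY_BO_WC, MY_BO_UNCACHED };

struct my_shmem_bo {
	struct drm_gem_object base;
	struct page **pages;
	enum my_bo_cache_mode cache_mode;
	void *vaddr;
};

/* Pin the shmem pages and map them into the kernel with the wanted caching. */
static int my_shmem_bo_vmap(struct my_shmem_bo *bo)
{
	unsigned int npages = bo->base.size >> PAGE_SHIFT;
	pgprot_t prot = PAGE_KERNEL;

	bo->pages = drm_gem_get_pages(&bo->base);
	if (IS_ERR(bo->pages))
		return PTR_ERR(bo->pages);

	if (bo->cache_mode == MY_BO_WC)
		prot = pgprot_writecombine(PAGE_KERNEL);
	else if (bo->cache_mode == MY_BO_UNCACHED)
		prot = pgprot_noncached(PAGE_KERNEL);

	bo->vaddr = vmap(bo->pages, npages, VM_MAP, prot);
	if (!bo->vaddr) {
		drm_gem_put_pages(&bo->base, bo->pages, false, false);
		return -ENOMEM;
	}
	return 0;
}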
Out of interest, how can you use writecombine and uncached with shared
memory, as typically the linear kernel map of the affected pages also
needs changing?
I think on x86 at least the core code takes care of that. On arm, I'm
not sure this just works, since you can't change the mode of the
linear map. Instead you need specially allocated memory which is _not_
in the linear map. I guess arm boxes with pcie slots aren't that
common yet :-)
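
Concretely, the mapping in question is the userspace one: drivers apply
the cache mode to vma->vm_page_prot and then insert the shmem pages PTE by
PTE. A rough sketch of that pattern follows (the names are illustrative,
and whether the linear-map alias gets fixed up on those inserts is exactly
the open question here):

#include <linux/fs.h>
#include <linux/mm.h>
#include <drm/drm_gem.h>

/* Map a shmem-backed object into userspace with a write-combined pgprot. */
static int my_shmem_bo_mmap(struct file *filp, struct vm_area_struct *vma)
{
	int ret;

	ret = drm_gem_mmap(filp, vma);	/* set up the VMA; no pages inserted yet */
	if (ret)
		return ret;

	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
	return 0;
}

/* The fault handler then inserts the already pinned shmem pages. */
static int my_shmem_bo_fault_one(struct vm_area_struct *vma,
				 unsigned long addr, struct page *page)
{
	return vm_insert_page(vma, addr, page);
}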

I was hoping to get some feedback on these cache mode flags.

These drivers use them:
  udl/udl_gem.c
  omapdrm/omap_gem.c
  msm/msm_gem.c
  etnaviv/etnaviv_gem.c

I don't know if they make sense or not, so any help is appreciated.

It's possible, as Daniel says, that the x86 PAT system now automatically tracks all PTE inserts and changes the linear kernel map accordingly. It certainly didn't use to do that, so for example TTM does it explicitly, and it's a very slow operation since it involves a global cache and TLB flush across all cores. So if PAT is doing that on a per-page basis, it's probably bound to be very slow.
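
For comparison, a sketch of the explicit approach: change the linear-map
attributes for a whole batch of pages up front, so the global cache/TLB
flush is paid once per batch rather than once per page. This assumes the
x86-only set_pages_array_wc()/set_pages_array_wb() helpers that TTM's page
pool uses; the wrapper names are made up:

#ifdef CONFIG_X86
#include <asm/set_memory.h>
#endif
#include <linux/mm.h>

/* Switch the kernel linear map of a batch of pages to write-combining. */
static int my_pages_set_wc(struct page **pages, int npages)
{
#ifdef CONFIG_X86
	return set_pages_array_wc(pages, npages);
#else
	return -ENOSYS;	/* no portable way to change the linear map here */
#endif
}

/* Restore the default write-back attribute before freeing the pages. */
static void my_pages_set_wb(struct page **pages, int npages)
{
#ifdef CONFIG_X86
	set_pages_array_wb(pages, npages);
#endif
}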

The concern with shmem (the last time I looked, which was a couple of years ago, I admit) was that shmem needs the linear kernel map to copy data in and out when swapping and on hibernate. If the drivers you mention above don't use shmem, that's all fine, but the combination of shmem and special memory that is NOT mapped in the kernel linear map does look a bit odd to me.

/Thomas



Noralf.

-Daniel

Thanks,
Thomas


_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel






