Re: drm: Why shmem?

Noralf Trønnes <noralf@xxxxxxxxxxx> writes:

>> On 15.09.2017 02.45, Eric Anholt wrote:
>> Noralf Trønnes <noralf@xxxxxxxxxxx> writes:
>>
>>>> On 30.08.2017 09.40, Daniel Vetter wrote:
>>>> On Tue, Aug 29, 2017 at 10:40:04AM -0700, Eric Anholt wrote:
>>>>> Daniel Vetter <daniel@xxxxxxxx> writes:
>>>>>
>>>>>> On Mon, Aug 28, 2017 at 8:44 PM, Noralf Trønnes <noralf@xxxxxxxxxxx> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Currently I'm using the cma library with tinydrm because it was so
>>>>>>> simple to use even though I have to work around the fact that reads are
>>>>>>> uncached. A bigger problem that I have become aware of is that it
>>>>>>> restricts the dma buffers it can import since they have to be contiguous.
>>>>>>>
>>>>>>> So I looked to udl and it uses shmem. Fine, let's make a shmem gem
>>>>>>> library similar to the cma library.
>>>>>>>
>>>>>>> Now I have done so and have started to think about the DOC: section,
>>>>>>> explaining what the library does. And I'm stuck: what's the benefit of
>>>>>>> using shmem compared to just using alloc_page()?
>>>>>> Gives you swapping (and eventually maybe even migration) since there's
>>>>>> a real filesystem behind it. Atm this only works if you register a
>>>>>> shrinker callback, which for display drivers is a bit overkill. See
>>>>>> i915 or msm for examples (or ttm, if you want an entire fancy
>>>>>> framework), and git grep shrinker -- drivers/gpu.
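
For illustration, a minimal sketch of such a shrinker registration. The
driver-private struct and callback bodies below are hypothetical, not
taken from i915 or msm:

#include <linux/kernel.h>
#include <linux/shrinker.h>

struct my_drm_private {
	struct shrinker shrinker;
	unsigned long purgeable;	/* pages whose backing may be dropped */
};

static unsigned long my_gem_shrinker_count(struct shrinker *shrinker,
					   struct shrink_control *sc)
{
	struct my_drm_private *priv =
		container_of(shrinker, struct my_drm_private, shrinker);

	/* Tell the VM how many pages we could free if asked to. */
	return priv->purgeable;
}

static unsigned long my_gem_shrinker_scan(struct shrinker *shrinker,
					  struct shrink_control *sc)
{
	/*
	 * Walk the purgeable objects, unpin cached pages/mappings and
	 * return the number of pages freed; SHRINK_STOP if nothing
	 * could be reclaimed this time.
	 */
	return SHRINK_STOP;
}

static int my_gem_shrinker_init(struct my_drm_private *priv)
{
	priv->shrinker.count_objects = my_gem_shrinker_count;
	priv->shrinker.scan_objects = my_gem_shrinker_scan;
	priv->shrinker.seeks = DEFAULT_SEEKS;
	return register_shrinker(&priv->shrinker);
}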
>>>>> The shrinker is only needed if you need some impetus to unbind objects
>>>>> from your page tables, right?  If you're just binding the pages for the
>>>>> moment that you're doing SPI transfers to the display, then in the
>>>>> remaining time it could be swapped out, right?
>>>> Yup, and for SPI the setup overhead shouldn't matter. But everyone else
>>>> probably wants to cache mappings and page lists, and that means some kind
>>>> of shrinker to drop them when needed.
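
As a sketch of that pin-only-around-the-transfer pattern for a
shmem-backed object; my_spi_transfer_pages() is a made-up placeholder
for whatever actually clocks the pixels out:

#include <drm/drm_gem.h>
#include <linux/err.h>
#include <linux/mm.h>

int my_spi_transfer_pages(struct page **pages, unsigned long npages);

static int my_update_display(struct drm_gem_object *obj)
{
	struct page **pages;
	int ret;

	/* Swaps the pages back in if needed and pins them. */
	pages = drm_gem_get_pages(obj);
	if (IS_ERR(pages))
		return PTR_ERR(pages);

	ret = my_spi_transfer_pages(pages, obj->size >> PAGE_SHIFT);

	/* Unpin: the pages stay allocated but become swappable again. */
	drm_gem_put_pages(obj, pages, false, false);

	return ret;
}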
>>> Let me see if I've understood this correctly:
>>>
>>> The first time I call drm_gem_get_pages() the buffer pages are
>>> allocated and pinned.
>>> When I then call drm_gem_put_pages() the pages are unpinned, but not freed.
>>> The kernel is now free to swap out the pages if necessary.
>>> Calling drm_gem_get_pages() a second time will swap in the pages if
>>> necessary and pin them.
>>>
>>> If this is correct, where are pages freed?
>> drm_gem_object_release() during freeing of the object.
>>
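
So roughly, assuming a hypothetical wrapper struct around the GEM
object (nothing here is from an actual driver):

#include <drm/drm_gem.h>
#include <linux/kernel.h>
#include <linux/slab.h>

struct my_bo {
	struct drm_gem_object base;
	struct page **pages;
};

#define to_my_bo(gem) container_of(gem, struct my_bo, base)

static void my_gem_free_object(struct drm_gem_object *obj)
{
	struct my_bo *bo = to_my_bo(obj);

	/* If the pages are still pinned, drop that reference first. */
	if (bo->pages)
		drm_gem_put_pages(obj, bo->pages, false, false);

	/*
	 * Releases the shmem file backing the object; that is what
	 * finally frees the pages.
	 */
	drm_gem_object_release(obj);
	kfree(bo);
}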
>
> I see that you get the pages in vc5_bo_create() and put them in
> vc5_free_object(). This means that you don't benefit from the shmem
> "advantage" of swapping.
> Why do you use shmem? Simplicity since it's built into DRM?

I *just* started writing this driver.  I'm not unpinning objects under
memory pressure yet.


