Re: [PATCH v5 4/5] drm: Add library for shmem backed GEM objects

On 29.01.2019 01.19, Eric Anholt wrote:
> Noralf Trønnes <noralf@xxxxxxxxxxx> writes:
> 
>> On 28.01.2019 21.57, Rob Herring wrote:
>>> On Sun, Dec 2, 2018 at 9:59 AM Noralf Trønnes <noralf@xxxxxxxxxxx> wrote:
>>>>
>>>>
>>>> On 30.11.2018 00.58, Eric Anholt wrote:
>>>>> Daniel Vetter <daniel@xxxxxxxx> writes:
>>>>>
>>>>>> On Wed, Nov 28, 2018 at 01:52:56PM -0800, Eric Anholt wrote:
>>>>>>> Daniel Vetter <daniel@xxxxxxxx> writes:
>>>>>>>
>>>>>>>> On Tue, Nov 27, 2018 at 12:38:44PM -0800, Eric Anholt wrote:
>>>>>>>>> Daniel Vetter <daniel@xxxxxxxx> writes:
>>>>>>>>>
>>>>>>>>>> On Mon, Nov 26, 2018 at 04:36:21PM -0800, Eric Anholt wrote:
>>>>>>>>>>> Noralf Trønnes <noralf@xxxxxxxxxxx> writes:
>>>>>>>>>>>> +static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +      struct drm_gem_object *obj = vma->vm_private_data;
>>>>>>>>>>>> +      struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>>>>>>>>>>>> +
>>>>>>>>>>>> +      drm_gem_shmem_put_pages(shmem);
>>>>>>>>>>>> +      drm_gem_vm_close(vma);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>> +const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>>>>>>>>>>>> +      .fault = drm_gem_shmem_fault,
>>>>>>>>>>>> +      .open = drm_gem_vm_open,
>>>>>>>>>>>> +      .close = drm_gem_shmem_vm_close,
>>>>>>>>>>>> +};
>>>>>>>>>>>> +EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>>>>>>>>>>> I just saw a warning from drm_gem_shmem_put_pages() for
>>>>>>>>>>> !shmem->pages_use_count -- I think drm_gem_vm_open() needs to
>>>>>>>>>>> drm_gem_shmem_get_pages().
>>>>>>>>>> Yeah we need a drm_gem_shmem_vm_open here.
>>>>>>>>> Adding one of those fixed my refcounting issues, so I've sent out a v6
>>>>>>>>> with it.
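(For context, the fix presumably looks roughly like the sketch below:
an open handler that takes its own pages reference for each new vma, so
the drm_gem_shmem_put_pages() in vm_close stays balanced. This is a
guess at the shape of the change, not the actual v6 patch.)

static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
{
        struct drm_gem_object *obj = vma->vm_private_data;
        struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

        /* Each vma holds its own pages reference; error handling
         * elided for brevity.
         */
        drm_gem_shmem_get_pages(shmem);
        drm_gem_vm_open(vma);
}

drm_gem_shmem_vm_ops would then point .open at drm_gem_shmem_vm_open
instead of drm_gem_vm_open.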
>>>>>>>> Just realized that I've reviewed this patch already, spotted that vma
>>>>>>>> management issue there too. Plus a pile of other things. From reading that
>>>>>>>> other thread, the discussion with Noralf concluded with "not yet ready for
>>>>>>>> prime time", unfortunately :-/
>>>>>>> I saw stuff about how it wasn't usable for SPI because SPI does weird
>>>>>>> things with DMA mapping.  Was there something else?
>>>>>> Looking through that mail it was a bunch of comments to improve the
>>>>>> kerneldoc. Plus a note that buffer sharing/mmap is going to be all
>>>>>> incoherent and horrible (but I guess for vkms we don't care that much).
>>>>>> I'm just kinda wary of generic buffer handling that turns out to not be
>>>>>> actually all that useful. We have lots of deadends and kinda-midlayers in
>>>>>> this area already (but this one here definitely smells plenty better than
>>>>>> lots of older ones).
>>>>> FWIW, I really want shmem helpers for v3d.  The fault handling in
>>>>> particular has magic I don't understand, and this is not my first fault
>>>>> handler. :/
>>>>
>>>>
>>>> If you can use it for a "real" hw driver like v3d, I think it makes a lot
>>>> of sense to have it as a helper. I believe udl and a future simpledrm can
>>>> also make use of it.
>>>
>>> FWIW, I think etnaviv at least could use this too.
>>>
>>> I'm starting to look at panfrost and lima drivers and was trying to
>>> figure out where to start with the GEM code. So I've been comparing
>>> etnaviv, freedreno, and vgem implementations. They are all pretty
>>> similar from what I see. The per driver GEM obj structs certainly are.
>>> I can't bring myself to just copy etnaviv code over and do a
>>> s/etnaviv/panfrost/. So searching around a bit, I ended up on this
>>> thread. This seems to be what I need for panfrost (based on my brief
>>> study).
>>>
>>
>> I gave up on this due to problems with SPI DMA.
>> Eric tried to use it with vkms, but it failed. On his blog he speculates
>> that it might be due to cached CPU mappings:
>> https://anholt.github.io/twivc4/2018/12/03/twiv/
>>
>> For tinydrm I wanted cached mappings, but it might not work that well
>> with shmem. Maybe that's why I had problems with SPI DMA.
> 
> Actually, for tinydrm buffers that are dma-buf exported through prime, I
> really want tinydrm using WC mappings so that vc4 or v3d rendering (now
> supported on Mesa master with kmsro) works.
> 

I thought that when using PRIME, the buffer is created by the GPU
driver and then imported into the tinydrm driver? In that case it
doesn't matter how tinydrm creates its dumb buffers.
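For illustration, the userspace side of that flow could look roughly
like this (a sketch using the libdrm PRIME wrappers; share_buffer() and
its error handling are mine, not from the thread):

#include <unistd.h>
#include <stdint.h>
#include <xf86drm.h>

/* Export a BO from the render node (e.g. vc4/v3d) as a dma-buf and
 * import it into the display node (e.g. a tinydrm driver). The display
 * driver never allocates this buffer itself, so how it maps its own
 * dumb buffers doesn't come into play.
 */
static int share_buffer(int render_fd, int display_fd,
                        uint32_t render_handle, uint32_t *display_handle)
{
        int prime_fd, ret;

        ret = drmPrimeHandleToFD(render_fd, render_handle,
                                 DRM_CLOEXEC, &prime_fd);
        if (ret)
                return ret;

        ret = drmPrimeFDToHandle(display_fd, prime_fd, display_handle);
        close(prime_fd);
        return ret;
}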

That said, I will stay with CMA buffers for the SPI drivers. It just
works. The reason I looked into shmem is that tinydrm would then be
able to import buffers that aren't contiguous in memory, making it work
with more GPUs. And the reason for cached buffers is that with partial
flushing the dirty rectangle is copied to a separate buffer for
transfer, and CPU access to WC memory is quite slow. But of course for
these small buffers it's not that big a deal.
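The copy step looks roughly like this (a sketch along the lines of the
tinydrm copy helper; the name copy_clip_rect() is illustrative):

#include <linux/string.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>

/* Copy the dirty rectangle out of the framebuffer into a contiguous
 * transfer buffer. With cached framebuffer memory these reads are
 * cheap; with WC memory they are what makes partial flushing slow.
 */
static void copy_clip_rect(void *dst, void *vaddr,
                           struct drm_framebuffer *fb,
                           struct drm_clip_rect *clip)
{
        unsigned int cpp = fb->format->cpp[0];
        unsigned int pitch = fb->pitches[0];
        void *src = vaddr + clip->y1 * pitch + clip->x1 * cpp;
        size_t len = (clip->x2 - clip->x1) * cpp;
        unsigned int y;

        for (y = clip->y1; y < clip->y2; y++) {
                memcpy(dst, src, len);
                src += pitch;
                dst += len;
        }
}

Note that cached buffers also mean someone (the transfer buffer path or
the SPI layer) has to handle cache maintenance before DMA, which may be
where the SPI DMA trouble comes from.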

Noralf.