Hi, Noralf.
A couple of issues below:
On 05/23/2018 04:34 PM, Noralf Trønnes wrote:
> This is the beginning of an API for in-kernel clients.
> First out is a way to get a framebuffer backed by a dumb buffer.
> Only GEM drivers are supported.
> The original idea of using an exported dma-buf was dropped because it
> also creates an anonymous file descriptor, which doesn't work when the
> buffer is created from a kernel thread. The easy way out is to use
> drm_driver.gem_prime_vmap to get the virtual address, which requires a
> GEM object. This excludes the vmwgfx driver, which is the only non-GEM
> driver apart from the legacy ones. A solution for vmwgfx will have to be
> worked out later if it wants to support the client API, which it
> probably will when we have a bootsplash client.
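If I'm reading the series right, the GEM path boils down to roughly the
following. This is only a sketch to make the discussion concrete: the
helper name client_vmap_dumb() is made up and error handling is
abbreviated, but drm_gem_object_lookup() and the gem_prime_vmap hook are
the existing interfaces:

#include <linux/err.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>

/*
 * Sketch only: map the dumb buffer behind @handle through its GEM object.
 * The GEM reference is returned in @objp and must be dropped again when
 * the mapping is released.
 */
static void *client_vmap_dumb(struct drm_device *dev, struct drm_file *file,
			      u32 handle, struct drm_gem_object **objp)
{
	struct drm_gem_object *obj;
	void *vaddr;

	obj = drm_gem_object_lookup(file, handle);
	if (!obj)
		return ERR_PTR(-ENOENT);

	if (!dev->driver->gem_prime_vmap) {
		drm_gem_object_put_unlocked(obj);
		return ERR_PTR(-EOPNOTSUPP);
	}

	vaddr = dev->driver->gem_prime_vmap(obj);
	if (!vaddr) {
		drm_gem_object_put_unlocked(obj);
		return ERR_PTR(-ENOMEM);
	}

	*objp = obj;
	return vaddr;
}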
Couldn't you add vmap() and vunmap() to the dumb buffer API for
in-kernel use rather than using GEM directly?
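I'm thinking of something along these lines; the ops struct and the hook
names below are invented purely for illustration, nothing like this
exists today:

#include <linux/types.h>

struct drm_device;
struct drm_file;

/*
 * Hypothetical kernel-mapping hooks for dumb buffers, so a non-GEM
 * driver like vmwgfx could implement them without exposing a GEM object.
 */
struct drm_dumb_kmap_ops {
	/* Return a kernel virtual address for the dumb buffer @handle. */
	void *(*vmap)(struct drm_device *dev, struct drm_file *file,
		      u32 handle);
	/* Release the mapping returned by vmap(). */
	void (*vunmap)(struct drm_device *dev, struct drm_file *file,
		       u32 handle, void *vaddr);
};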
But the main issue is pinning. It looks like the buffers are going to be
vmapped for a long time, which requires pinning, and that doesn't work
for some drivers when they bind the framebuffer to a plane, since that
might require pinning in another memory region and the vmap would have
to be torn down. Besides, buffer pinning should really be avoided if
possible:
Since we can't page-fault vmaps, and setting up / tearing down vmaps is
potentially an expensive operation, could we perhaps have a mapping API
that allows the driver to cache vmaps?
vmap()          // Indicates that we want to map a bo.
begin_access()  // Returns a virtual address, which may vary between
                // calls. Allows access. A fast operation. Behind the
                // scenes this pins / reserves the bo and returns a
                // cached vmap if the bo didn't move since the last
                // begin_access(), which is the typical case.
end_access()    // Disables access. Unpins / unreserves the bo.
vunmap_cached() // Indicates that the map is no longer needed. The
                // driver can release the cached map.
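As a very rough sketch, the four operations could be an ops table like
the one below. The struct and buffer type are made up here just to make
the idea concrete; the real thing would use whatever buffer abstraction
the client API settles on:

struct drm_client_buffer;	/* made-up buffer handle type */

/*
 * Illustrative only: the operations above as an ops table.
 * begin_access() may return a different virtual address each time if
 * the bo has moved.
 */
struct drm_client_bo_map_ops {
	/* Indicate that we want to map the bo; no pinning yet. */
	int (*vmap)(struct drm_client_buffer *buffer);
	/* Pin/reserve the bo and return a (possibly cached) vmap. */
	void *(*begin_access)(struct drm_client_buffer *buffer);
	/* End access; unpin/unreserve. The address is no longer valid. */
	void (*end_access)(struct drm_client_buffer *buffer);
	/* Map no longer needed; the driver can drop its cached vmap. */
	void (*vunmap_cached)(struct drm_client_buffer *buffer);
};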
The idea is that the API client would wrap all bo map accesses with
begin_access() / end_access(), allowing the bo to be moved in between,
as in the sketch below.
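For example, a client clearing its framebuffer could look like this.
Again a sketch: the ops table and buffer type are the made-up ones from
above, and the ERR_PTR error return from begin_access() is just an
assumption:

#include <linux/err.h>
#include <linux/string.h>

static int client_clear(struct drm_client_buffer *buffer,
			const struct drm_client_bo_map_ops *ops,
			size_t size)
{
	void *vaddr;

	vaddr = ops->begin_access(buffer);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);

	/* The address is only valid until end_access(); the bo may move
	 * (and be remapped) between bracketed sections. */
	memset(vaddr, 0, size);

	ops->end_access(buffer);

	return 0;
}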
/Thomas