On 07.04.2015 16:52, Christian Gmeiner wrote:
2015-04-07 16:38 GMT+02:00 Alex Deucher <alexdeucher@xxxxxxxxx>:
On Tue, Apr 7, 2015 at 3:46 AM, Lucas Stach <l.stach@xxxxxxxxxxxxxx> wrote:
On Sunday, 05.04.2015, 21:41 +0200, Christian Gmeiner wrote:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux <linux@xxxxxxxxxxxxxxxx>:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6, a single GPU pipe can have
multiple rendering backend states, which can be selected by the
pipe switch command, so there is no strict mapping between the
user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which
supports both 2D and 3D together. It's always bugged me that
etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea of how this multiple pipe stuff should be
done the right way - thanks, Russell.
So maybe you/we need to rework how the driver is designed regarding
cores and pipes.
On the i.MX6 we should get three device nodes, each supporting only one
pipe type. On the Dove we should get only one device node supporting two
pipe types. What do you think?
Sorry, but I strongly object to the idea of having multiple DRM
device nodes for the different pipes.
If we need the GPU2D and GPU3D to work together (and I can already see
use cases where we need to use the GPU2D in Mesa to do things the GPU3D
is incapable of), we would then need a lot more DMA-BUFs to get buffers
across the devices. This is a waste of resources and complicates things
a lot, as we would then have to deal with DMA-BUF fences just to get the
synchronization right, which is a no-brainer if we are on the same DRM
device.
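To make that concrete, a minimal sketch of what sharing a single buffer
between two separate DRM nodes would look like with the standard PRIME
ioctls (shown via the libdrm wrappers; the node/handle names are
illustrative, and the GEM allocation and most error handling are omitted):

#include <fcntl.h>	/* O_CLOEXEC, used by DRM_CLOEXEC */
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>	/* drmPrimeHandleToFD(), drmPrimeFDToHandle() */

/*
 * Export a GEM handle from the (hypothetical) 3D node as a dma-buf fd and
 * re-import it on the 2D node.  With a single device node none of this is
 * needed - both engines see the same GEM handle.
 */
static int share_bo_across_nodes(int fd_3d, uint32_t handle_3d,
				 int fd_2d, uint32_t *handle_2d)
{
	int dmabuf_fd;
	int ret;

	ret = drmPrimeHandleToFD(fd_3d, handle_3d, DRM_CLOEXEC, &dmabuf_fd);
	if (ret)
		return ret;

	ret = drmPrimeFDToHandle(fd_2d, dmabuf_fd, handle_2d);
	close(dmabuf_fd);
	return ret;
}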
Also it does not allow us to make any simplifications to the userspace
API, so I can't really see any benefit.
Also, on Dove I think one would expect to get a single pipe capable of
executing in both 2D and 3D state. If userspace takes advantage of that,
one could leave the sync between both engines to the FE (front end),
which is a good thing, as it allows the kernel to do less work. I don't
see why we should throw this away.
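For illustration, a minimal sketch of what leaving the sync to the FE means
at the command-stream level: a semaphore token from FE to PE followed by an
FE stall, so anything after it (e.g. a pipe switch) waits for the pixel
engine to drain. The encodings are taken from the public rnndb database and
should be treated as illustrative assumptions, not authoritative:

#include <stdint.h>

#define FE_OPCODE_LOAD_STATE	(0x1u << 27)
#define FE_OPCODE_STALL		(0x9u << 27)
#define FE_LOAD_STATE_COUNT(n)	(((n) & 0x3ffu) << 16)
#define FE_LOAD_STATE_OFFSET(r)	(((r) >> 2) & 0xffffu)

#define GL_SEMAPHORE_TOKEN	0x03808
#define TOKEN_FROM(x)		((x) & 0x1fu)		/* bits 4:0 */
#define TOKEN_TO(x)		(((x) & 0x1fu) << 8)	/* bits 12:8 */
#define SYNC_RECIPIENT_FE	0x1
#define SYNC_RECIPIENT_PE	0x7

/* Make the FE wait for the PE before fetching any further commands. */
static void emit_sem_stall_fe_pe(uint32_t *cs, unsigned int *pos)
{
	/* Queue a semaphore token from FE to PE ... */
	cs[(*pos)++] = FE_OPCODE_LOAD_STATE | FE_LOAD_STATE_COUNT(1) |
		       FE_LOAD_STATE_OFFSET(GL_SEMAPHORE_TOKEN);
	cs[(*pos)++] = TOKEN_FROM(SYNC_RECIPIENT_FE) | TOKEN_TO(SYNC_RECIPIENT_PE);

	/* ... and stall the FE on that token. */
	cs[(*pos)++] = FE_OPCODE_STALL;
	cs[(*pos)++] = TOKEN_FROM(SYNC_RECIPIENT_FE) | TOKEN_TO(SYNC_RECIPIENT_PE);
}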
Just about all modern GPUs support varying combinations of independent
pipelines, and we currently support this just fine via a single device
node in other DRM drivers. E.g., modern Radeons support one or more
gfx, compute, DMA, video decode and video encode engines. What
combination is present depends on the ASIC.
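As a schematic sketch of that model: one device node, and the engine is
chosen per submission by a ring index. The RADEON_CS_RING_* values mirror
radeon_drm.h; the submission struct and submit() helper are hypothetical
stand-ins for the real chunk-based CS ioctl, which is simplified away here:

#include <stdint.h>

#define RADEON_CS_RING_GFX	0	/* gfx/3D engine */
#define RADEON_CS_RING_COMPUTE	1	/* compute */
#define RADEON_CS_RING_DMA	2	/* async DMA engine */
#define RADEON_CS_RING_UVD	3	/* video decode */
#define RADEON_CS_RING_VCE	4	/* video encode */

/* Hypothetical per-submission description; the real UAPI uses CS chunks. */
struct submission {
	uint32_t ring;		/* which engine executes this command stream */
	uint64_t ib_gpu_addr;	/* indirect buffer with the actual commands */
	uint32_t ib_dw;		/* IB size in dwords */
};

/* Stand-in for the driver's CS ioctl. */
static int submit(int drm_fd, const struct submission *sub)
{
	(void)drm_fd;
	(void)sub;
	return 0;
}

static int dispatch(int drm_fd, struct submission *gfx, struct submission *dma)
{
	int ret;

	/* Both submissions go through the same node; only the ring differs. */
	gfx->ring = RADEON_CS_RING_GFX;
	ret = submit(drm_fd, gfx);
	if (ret)
		return ret;

	dma->ring = RADEON_CS_RING_DMA;
	return submit(drm_fd, dma);
}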
So if you have multiple GPUs (IP cores with separate IRQs, register
addresses, ...) with combinations of independent pipelines, that would
mean that every GPU gets its own device node and supports its own
combination of independent pipelines?
Yeah, correct. For Radeon it actually depends on how the multiple
GPUs/pipelines are wired up.
If you have multiple GPUs, each one usually has a different internal
address space and different resources (VRAM, special memory regions like
LDS/GDS, etc.) and a couple of different pipelines.
It wouldn't make sense to create a separate device node for each pipeline,
because, as noted, that would mean we would have to share all resources
using DMA-BUF file descriptors.
Regards,
Christian.
greets
--
Christian Gmeiner, MSc
https://soundcloud.com/christian-gmeiner
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel