Re: [etnaviv-next v13 7/7] drm/etnaviv: Add support for vivante GPU cores attached via PCI(e)

Hi,

On 2024/2/13 22:38, Maxime Ripard wrote:
On Sat, Feb 10, 2024 at 12:25:33AM +0800, Sui Jingfeng wrote:
On 2024/2/9 23:15, Maxime Ripard wrote:
On Fri, Feb 09, 2024 at 12:02:48PM +0100, Daniel Vetter wrote:
On Thu, Feb 08, 2024 at 04:27:02PM +0100, Maxime Ripard wrote:
On Wed, Feb 07, 2024 at 10:35:49AM +0100, Daniel Vetter wrote:
On Wed, Feb 07, 2024 at 01:27:59AM +0800, Sui Jingfeng wrote:
The component helper functions are the glue used to bind multiple GPU
cores to a virtual master platform device, which is fine and works well
for SoCs that contain multiple GPU cores.

The problem is that userspace programs (such as the X server and Mesa)
will search for the PCIe device to use if one exists. In other words,
userspace programs open the PCIe device with higher priority. Creating a
virtual master platform device for PCI(e) GPUs is unnecessary, as the PCI
device has already been created by the time drm/etnaviv is loaded.

We create virtual platform devices as a representation of the Vivante GPU
IP cores. As all sub-components are attached via the PCIe master device,
we reflect this hardware layout by binding all of the virtual children to
the real master.

Signed-off-by: Sui Jingfeng <sui.jingfeng@xxxxxxxxx>
Uh so my understanding is that drivers really shouldn't create platform
devices of their own. For this case here I think the aux-bus framework is
the right thing to use. An alternative would be some infrastructure where
you feed a DT tree to the driver core or PCI subsystem and it instantiates
it all for you correctly, and especially with hot-unplug all done right,
since this is PCI now, not actually part of an SoC that cannot be
hot-unplugged.
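
For reference, registering one auxiliary device per GPU core from the PCI
driver could look roughly like the sketch below; the "core" device name and
the helper itself are purely hypothetical, not something this series (or
the aux-bus documentation) prescribes.

#include <linux/auxiliary_bus.h>
#include <linux/pci.h>
#include <linux/slab.h>

static void etnaviv_aux_release(struct device *dev)
{
	/* release callback required by the auxiliary bus: free the wrapper */
	kfree(to_auxiliary_dev(dev));
}

/* Hypothetical helper: register one auxiliary device per Vivante core. */
static int etnaviv_pci_add_aux_core(struct pci_dev *pdev, u32 id)
{
	struct auxiliary_device *adev;
	int ret;

	adev = kzalloc(sizeof(*adev), GFP_KERNEL);
	if (!adev)
		return -ENOMEM;

	adev->name = "core";		/* matched as "<module>.core" */
	adev->id = id;
	adev->dev.parent = &pdev->dev;
	adev->dev.release = etnaviv_aux_release;

	ret = auxiliary_device_init(adev);
	if (ret) {
		kfree(adev);
		return ret;
	}

	ret = auxiliary_device_add(adev);
	if (ret)
		auxiliary_device_uninit(adev);

	return ret;
}
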
I don't think we need intermediate platform devices at all. We just need
to register our GPU against the PCI device and that's it. We don't need
a platform device, we don't need the component framework.
Afaik that's what this series does. The component stuff is for the
internal structure of the gpu ip, so that the same modular approach that
works for arm-soc also works for pci chips.
But there should be a single PCI device, while we have multiple "DT"
devices, right? Or are there several PCI devices too on that PCI card?

There is only a single PCI(e) device on that PCI(e) card, and this single
PCI(e) device is selected as the component master. All other hardware IP
blocks sit behind the single PCI(e) master. They may include display
controllers, GPUs, video decoders, HDMI display bridge hardware units, etc.

But all of those hardware IP blocks share the same MMIO register PCI BAR;
this PCI BAR is a kind of PCI(e) MEM resource. It is a relatively *big*
chunk, as large as 32MB of address range for the JingJia Macro dGPU.
Therefore, I break the whole register memory (MMIO) resource into smaller
pieces by creating platform devices manually; a manually created platform
device is called a virtual child in this series.

In short, we cut the whole into smaller pieces; each smaller piece is a
single hardware IP block and thus deserves its own device driver. We will
have multiple platform devices if the dGPU contains multiple hardware IP
blocks. On the driver side, we bind all of the scattered driver modules
together with the component framework.
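
For illustration only, the carve-up described above could look roughly like
this; the BAR offsets, the "name"/"id" parameters and the helper itself are
made up for the example, not taken verbatim from the series.

#include <linux/component.h>
#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

/*
 * Hypothetical helper: carve one sub-range out of BAR 0, register it as
 * a "virtual child" platform device, and add that child to the component
 * match list of the PCI (master) device.
 */
static int etnaviv_pci_add_core(struct pci_dev *pdev,
				struct component_match **match,
				const char *name, int id,
				resource_size_t offset,
				resource_size_t size, int irq)
{
	struct resource res[] = {
		DEFINE_RES_MEM(pci_resource_start(pdev, 0) + offset, size),
		DEFINE_RES_IRQ(irq),
	};
	struct platform_device_info info = {
		.parent = &pdev->dev,
		.name = name,
		.id = id,
		.res = res,
		.num_res = ARRAY_SIZE(res),
	};
	struct platform_device *child;

	child = platform_device_register_full(&info);
	if (IS_ERR(child))
		return PTR_ERR(child);

	component_match_add(&pdev->dev, match, component_compare_dev,
			    &child->dev);
	return 0;
}

The PCI probe would then hand the accumulated match list to
component_master_add_with_match(), so every virtual child still binds
through the existing etnaviv GPU platform driver.
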
That's kind of my point then. If there's a single device, there's no
need to create intermediate devices and use the component framework to
tie them all together. You can have a simpler approach where you create
a function that takes the memory area it operates on (and whatever
additional resource it needs: interrupt, clocks, etc.) and call that
directly from the PCIe device probe, and the MMIO device bind.
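
Something like the following, presumably; the etnaviv_gpu_init_one() helper
is only a placeholder for whatever common bring-up function would be shared
between the platform and PCI paths, not an existing etnaviv function.

#include <linux/pci.h>

/*
 * Hypothetical shared bring-up helper: takes the register window and
 * interrupt it operates on, then sets up the GPU and registers the DRM
 * device.
 */
int etnaviv_gpu_init_one(struct device *dev, void __iomem *mmio, int irq);

static int etnaviv_pci_probe(struct pci_dev *pdev,
			     const struct pci_device_id *ent)
{
	void __iomem *mmio;
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* hand the whole BAR 0 register space to the common helper */
	mmio = pcim_iomap(pdev, 0, 0);
	if (!mmio)
		return -ENOMEM;

	return etnaviv_gpu_init_one(&pdev->dev, mmio, pdev->irq);
}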


Yes, you are right. I implemented the method just as you suggested in v12
of this series (see patch 0004 at [1]). But at v13, I suddenly realized
that the component code path plus the non-component code path yields a
"side-by-side" implementation. The old non-component approach cannot
support binding multiple sub-cores; it can only support the single Vivante
GPU IP core case. But there are dGPUs which ship two identical cores.

Meanwhile, the component-based approach implemented in this version is the
most universal, flexible and extensible. We could bind a virtual display
driver and/or a real display driver to the real master (that is, the
PCI(e) device). The PCI(e) device is responsible for presenting the DRM
service to userspace, like a leader or agent. All other sub-hardware and
software hide behind the master.
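
To make that concrete, the master bind on the PCI(e) device would be
roughly as below; this is only a sketch, it assumes the existing
etnaviv_drm_driver structure and leaves out the unbind side.

#include <drm/drm_drv.h>
#include <linux/component.h>

static int etnaviv_pci_master_bind(struct device *dev)
{
	struct drm_device *drm;
	int ret;

	/* the PCI device owns the drm_device presented to userspace */
	drm = drm_dev_alloc(&etnaviv_drm_driver, dev);
	if (IS_ERR(drm))
		return PTR_ERR(drm);

	/* bind all virtual children (GPU cores, etc.) behind the master */
	ret = component_bind_all(dev, drm);
	if (ret)
		goto out_put;

	ret = drm_dev_register(drm, 0);
	if (ret)
		goto out_unbind;

	return 0;

out_unbind:
	component_unbind_all(dev, drm);
out_put:
	drm_dev_put(drm);
	return ret;
}

static const struct component_master_ops etnaviv_pci_master_ops = {
	.bind = etnaviv_pci_master_bind,
	/* .unbind omitted in this sketch */
};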


Besides, Lucas asked me to implement the driver this way in v10 (see [2]).

Lucas said:

"My favorite option would be to just always use the component path
even when the GPU is on a PCI device to keep both paths mostly aligned.
One could easily image both a 3D and a 2D core being made available
though the same PCI device."

Lucas, are you watching? How about this version? Could you help review it, please?


[1] https://patchwork.freedesktop.org/series/127084/

[2] https://lore.kernel.org/dri-devel/0f1095ef333da7ea103486a1121ca9038815e57c.camel@xxxxxxxxxxxxxx/


Maxime


