Re: [PATCH v3 5/7] drm/sun4i: Rely on dma interconnect for our RAM offset

On 13/02/2019 15:41, Maxime Ripard wrote:
> Hi Robin,
> 
> Thanks for your feedback!
> 
> On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
>> On 11/02/2019 15:02, Maxime Ripard wrote:
>>> Now that we can express our DMA topology, rely on those properties
>>> instead of hardcoding an offset from the dma_addr_t, which wasn't
>>> really great.
>>>
>>> We still need to add some code to deal with old DTs that lack that
>>> property, but we move the offset to the DRM device's dma_pfn_offset
>>> to be able to rely on just the dma_addr_t associated with the GEM
>>> object.
>>>
>>> Acked-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
>>> Signed-off-by: Maxime Ripard <maxime.ripard@xxxxxxxxxxx>
>>> ---
>>>  drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
>>>  1 file changed, 21 insertions(+), 7 deletions(-)

>>> diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
>>> index 9e9255ee59cd..1846a1b30fea 100644
>>> --- a/drivers/gpu/drm/sun4i/sun4i_backend.c
>>> +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
>>> @@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
>>>  	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
>>>  	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
>>> -	/*
>>> -	 * backend DMA accesses DRAM directly, bypassing the system
>>> -	 * bus. As such, the address range is different and the buffer
>>> -	 * address needs to be corrected.
>>> -	 */
>>> -	paddr -= PHYS_OFFSET;
>>> -
>>>  	if (fb->format->is_yuv)
>>>  		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
>>> @@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
>>>  	dev_set_drvdata(dev, backend);
>>>  	spin_lock_init(&backend->frontend_lock);
>>> +	if (of_find_property(dev->of_node, "interconnects", NULL)) {
>>> +		/*
>>> +		 * This assumes we have the same DMA constraints for all the
>>> +		 * devices in our pipeline (all the backends, but also the
>>> +		 * frontends). This sounds bad, but it has always been the case
>>> +		 * for us, and DRM doesn't do per-device allocation either, so
>>> +		 * we would need to fix DRM first...
>>> +		 */
>>> +		ret = of_dma_configure(drm->dev, dev->of_node, true);
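(As a rough illustration of what the check above is looking for, the backend
node would describe its DMA path to memory along these lines; the MBUS port
ID here is made up, and the "dma-mem" interconnect name is assumed to be
what the OF code uses to locate the DMA parent bus:

	be0: display-backend@1e60000 {
		compatible = "allwinner,sun8i-a33-display-backend";
		/* ... registers, clocks, resets, ports ... */
		interconnects = <&mbus 18>;
		interconnect-names = "dma-mem";
	};

With that in place, of_dma_configure() can work out the RAM offset from the
dma-ranges on the path to memory, instead of the driver subtracting
PHYS_OFFSET by hand.)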

>> It would be even nicer if we could ensure that drm->dev originates from a DT
>> node which has the appropriate interconnects property itself, such that we
>> can assume it's already configured correctly.
> 
> The thing is drm->dev comes from a node in the DT that is a virtual
> node, and therefore doesn't have any resources attached, so I'm not
> sure we have any other way, unfortunately.

Right, I appreciate that it may not be feasible to swizzle drm->dev for one
of the 'real' component devices. What I was also thinking, though, is that
since the virtual device node effectively represents the aggregation of the
other component devices, we could just say that it also has to have its own
link to the MBUS interconnect (with the ID of the pipeline entrypoint it's
associated with, I guess). That ought to be enough to get dma_configure()
to do the job, and in fairness it's no *less* accurate a description of the
hardware, even if it might look a little funky to some.
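(Something along these lines, purely as a sketch of that suggestion; the
node and the allwinner,pipelines property follow the existing binding,
while the interconnect ID is hypothetical and would presumably mirror the
one on the pipeline's entry point:

	display-engine {
		compatible = "allwinner,sun8i-a33-display-engine";
		allwinner,pipelines = <&fe0>;
		/* same MBUS master ID as the pipeline entrypoint */
		interconnects = <&mbus 18>;
		interconnect-names = "dma-mem";
	};

dma_configure() on drm->dev would then pick up the offset by itself,
without the driver having to borrow the backend's OF node.)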

Robin.



