Quoting Daniele Ceraolo Spurio (2019-03-13 23:13:16)
> This will allow further simplifications in the uncore handling.
>
> RFC: if we want to keep the pointer logically separate from the uncore,
> we could also move both the regs pointer and the uncore struct
> inside a new structure (intel_mmio?) and pass that around instead, or
> just take a copy of the pointer
>
> Cc: Paulo Zanoni <paulo.r.zanoni@xxxxxxxxx>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_drv.c     |  6 +++---
>  drivers/gpu/drm/i915/i915_drv.h     |  6 ++----
>  drivers/gpu/drm/i915/i915_irq.c     | 22 +++++++++++-----------
>  drivers/gpu/drm/i915/intel_lrc.c    |  6 +++---
>  drivers/gpu/drm/i915/intel_uncore.c |  5 ++---
>  drivers/gpu/drm/i915/intel_uncore.h |  2 ++
>  6 files changed, 23 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index a2e039f523c0..2470c1ef4951 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -949,8 +949,8 @@ static int i915_mmio_setup(struct drm_i915_private *dev_priv)
>                  mmio_size = 512 * 1024;
>          else
>                  mmio_size = 2 * 1024 * 1024;
> -        dev_priv->regs = pci_iomap(pdev, mmio_bar, mmio_size);
> -        if (dev_priv->regs == NULL) {
> +        dev_priv->uncore.regs = pci_iomap(pdev, mmio_bar, mmio_size);
> +        if (dev_priv->uncore.regs == NULL) {
>                  DRM_ERROR("failed to map registers\n");
>
>                  return -EIO;
> @@ -967,7 +967,7 @@ static void i915_mmio_cleanup(struct drm_i915_private *dev_priv)
>          struct pci_dev *pdev = dev_priv->drm.pdev;
>
>          intel_teardown_mchbar(dev_priv);
> -        pci_iounmap(pdev, dev_priv->regs);
> +        pci_iounmap(pdev, dev_priv->uncore.regs);

At the very least, with the move to the uncore as owner of regs, the setup
and cleanup should be moved into intel_uncore as well. As it stands, it
looks reasonable, with the uncore being the arbiter and debugger of mmio
access.

Now, what would make Ville and myself happy would be an artificial split of
uncore into gpu and display roles, with an arch-arbiter around forcewake
itself. That is, since the display doesn't really use forcewake, it doesn't
need to share the pain of the forcewake spinlock -- it still needs some
locking to serialise mmio, but it wants much finer control, as locking is a
big hindrance wrt flip latency. On the gpu side, we already avoid the
general uncore routines on the hottest paths, but equally do not want to
inherit latency from display operations. So even a coarse split between
display/gpu should bring immediate improvements. I'll leave you to judge
whether the knowledge and data structures gained from that exercise will
pay off in future.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
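
For illustration, a minimal sketch of the move Chris suggests above, with
the register BAR map/unmap owned by intel_uncore.c instead of i915_drv.c.
The function names, the exact BAR/size selection and the omission of the
MCHBAR handling are assumptions made for this sketch, not the eventual
upstream interface:

    #include "i915_drv.h"
    #include "intel_uncore.h"

    /*
     * Sketch only: let the uncore map and own its ->regs pointer.
     * Names and the exact gen checks are illustrative.
     */
    static int intel_uncore_setup_mmio(struct drm_i915_private *dev_priv)
    {
            struct intel_uncore *uncore = &dev_priv->uncore;
            struct pci_dev *pdev = dev_priv->drm.pdev;
            int mmio_bar = IS_GEN(dev_priv, 2) ? 1 : 0;
            int mmio_size;

            /* Map only as much register space as the platform uses. */
            if (INTEL_GEN(dev_priv) < 5)
                    mmio_size = 512 * 1024;
            else
                    mmio_size = 2 * 1024 * 1024;

            uncore->regs = pci_iomap(pdev, mmio_bar, mmio_size);
            if (uncore->regs == NULL) {
                    DRM_ERROR("failed to map registers\n");
                    return -EIO;
            }

            return 0;
    }

    static void intel_uncore_cleanup_mmio(struct drm_i915_private *dev_priv)
    {
            pci_iounmap(dev_priv->drm.pdev, dev_priv->uncore.regs);
    }

i915_driver_init_mmio() would then presumably call these helpers in place
of today's i915_mmio_setup()/i915_mmio_cleanup(), keeping uncore->regs
entirely under the uncore's control.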