Re: [PATCH 2/6] dt-bindings: display: msm: gmu: add optional ocmem property

On Wed, Jun 19, 2019 at 01:21:20PM -0700, Rob Clark wrote:
> On Wed, Jun 19, 2019 at 1:17 PM Rob Herring <robh+dt@xxxxxxxxxx> wrote:
> >
> > On Sun, Jun 16, 2019 at 7:29 AM Brian Masney <masneyb@xxxxxxxxxxxxx> wrote:
> > >
> > > Some A3xx and A4xx Adreno GPUs do not have GMEM inside the GPU core and
> > > must use the On Chip MEMory (OCMEM) in order to be functional. Add the
> > > optional ocmem property to the Adreno Graphics Management Unit bindings.
> > >
> > > Signed-off-by: Brian Masney <masneyb@xxxxxxxxxxxxx>
> > > ---
> > >  Documentation/devicetree/bindings/display/msm/gmu.txt | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/Documentation/devicetree/bindings/display/msm/gmu.txt b/Documentation/devicetree/bindings/display/msm/gmu.txt
> > > index 90af5b0a56a9..c746b95e95d4 100644
> > > --- a/Documentation/devicetree/bindings/display/msm/gmu.txt
> > > +++ b/Documentation/devicetree/bindings/display/msm/gmu.txt
> > > @@ -31,6 +31,10 @@ Required properties:
> > >  - iommus: phandle to the adreno iommu
> > >  - operating-points-v2: phandle to the OPP operating points
> > >
> > > +Optional properties:
> > > +- ocmem: phandle to the On Chip Memory (OCMEM) that's present on some Snapdragon
> > > +         SoCs. See Documentation/devicetree/bindings/soc/qcom/qcom,ocmem.yaml.
> >
> > We already have a couple of similar properties. Lets standardize on
> > 'sram' as that is what TI already uses.
> >
> > Also, is the whole OCMEM allocated to the GMU? If not you should have
> > child nodes to subdivide the memory.
> >
> 
> iirc, downstream a large chunk of OCMEM is statically allocated for
> GPU.. the remainder is dynamically allocated for different use-cases.
> The upstream driver Brian is proposing only handles the static
> allocation case

It appears that the GPU expects to use a specific region of OCMEM, starting
at address 0. The freedreno driver allocates 1MB of OCMEM on the Nexus 5,
starting at OCMEM address 0. As a test, I changed the starting address to
0.5MB, and kmscube showed only half the cube plus four wide black bars
across the screen:

https://www.flickr.com/photos/masneyb/48100534381/

> (and I don't think we have upstream support for the various audio and
> video use-cases that used dynamic OCMEM allocation downstream)

That's my understanding as well.

> Although maybe we should still have a child node to separate the
> statically and dynamically allocated parts?  I'm not sure what would
> make the most sense..

Given that the GPU expects a fixed address in OCMEM, perhaps it makes sense
to have child nodes. How about something like this, based on the
sram/sram.txt bindings?

  ocmem: ocmem@fdd00000 {
    compatible = "qcom,msm8974-ocmem";

    reg = <0xfdd00000 0x2000>, <0xfec00000 0x180000>;
    reg-names = "ctrl", "mem";

    clocks = <&rpmcc RPM_SMD_OCMEMGX_CLK>, <&mmcc OCMEMCX_OCMEMNOC_CLK>;
    clock-names = "core", "iface";

    gmu-sram@0 {
      reg = <0x0 0x100000>;
      pool;
    };

    misc-sram@100000 {
      reg = <0x100000 0x080000>;
      export;
    };
  };

I marked the misc region with 'export' since I've seen a reference in the
downstream ocmem sources to their closed-source libsensors, which runs in
userspace.

Looking at the sram bindings led me to the genalloc API
(Documentation/core-api/genalloc.rst). I wonder if that is how this should
be done?
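
For reference, here is a rough sketch (not part of this series) of how a
consumer driver might pull from such a pool with the genalloc API, assuming
the GMU node gains an 'sram' phandle pointing at the gmu-sram child node.
The property name and the helper below are only placeholders:

  #include <linux/device.h>
  #include <linux/genalloc.h>
  #include <linux/of.h>

  /* Hypothetical helper; assumes an 'sram' phandle in the GMU node. */
  static int gmu_alloc_ocmem(struct device *dev, size_t size)
  {
          struct gen_pool *pool;
          unsigned long vaddr;
          phys_addr_t paddr;

          /*
           * drivers/misc/sram.c registers a gen_pool for each child node
           * that carries the 'pool' property, so the phandle can be
           * resolved to that pool.
           */
          pool = of_gen_pool_get(dev->of_node, "sram", 0);
          if (!pool)
                  return -EPROBE_DEFER;

          vaddr = gen_pool_alloc(pool, size);
          if (!vaddr)
                  return -ENOMEM;

          /*
           * Physical address of the allocation; the GPU presumably wants
           * this expressed as an offset from the OCMEM base so that it
           * lands at the fixed offset 0 described above.
           */
          paddr = gen_pool_virt_to_phys(pool, vaddr);
          dev_dbg(dev, "ocmem allocation at %pa\n", &paddr);

          return 0;
  }

Since gen_pool_alloc() returns whichever free range the allocator picks,
keeping the GPU region in its own child node sized for the static allocation
(as in the example above) seems like the simplest way to guarantee it starts
at offset 0.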

Brian


