When CONFIG_INIT_ON_ALLOC_DEFAULT_ON is enabled, the GMU memory allocator
runs afoul of cache coherency issues because the memory is mapped as
write-combine but the cache is not cleaned after the memory is zeroed.

Rather than duplicate the hacky workaround we use in the GEM allocator for
the same reason, it turns out that we don't need a bespoke memory allocator
for the GMU at all. It uses a flat, global address space and there are only
two relatively minor allocations. In short, this is essentially what the
DMA API was created for, so replace a bunch of memory management code with
two calls to allocate and free DMA memory and we're fine.

In a previous version of this series I added the dma-ranges property to the
device tree file for the GMU and updated the bindings to YAML. Rob correctly
pointed out that we should set the DMA mask instead of using dma-ranges, so
I removed that bit, but I'm still pushing the YAML conversion because it is
good and we'll eventually need it anyway.

v3: Fix the YAML description per RobH, remove dma-ranges and replace it
    with the correct DMA mask in the GMU device. Convert the iova type to
    dma_addr_t to make it 32-bit friendly.
v2: Fix the example bindings for dma-ranges - the third item is the size.
    Pass false to of_dma_configure so that it fails probe if the DMA
    region is not set up.

Jordan Crouse (2):
  dt-bindings: display: msm: Convert GMU bindings to YAML
  drm/msm/a6xx: Use the DMA API for GMU memory objects

 .../devicetree/bindings/display/msm/gmu.txt  | 116 -------------------
 .../devicetree/bindings/display/msm/gmu.yaml | 123 +++++++++++++++++++++
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c        | 115 +++----------------
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h        |   7 +-
 4 files changed, 138 insertions(+), 223 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/display/msm/gmu.txt
 create mode 100644 Documentation/devicetree/bindings/display/msm/gmu.yaml

--
2.7.4
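
P.S. For readers not familiar with the driver, here is a rough sketch of
the kind of change described above: allocate GMU memory through the DMA API
and constrain the device with a DMA mask instead of a dma-ranges property.
This is not the actual patch; the struct and function names are made up for
illustration, and the write-combine helpers and the 32-bit mask are
assumptions on my part.

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Illustrative stand-in for the GMU memory object; not the driver's struct */
struct gmu_bo_sketch {
        void *virt;       /* kernel virtual address from the DMA API */
        dma_addr_t iova;  /* device address, 32-bit friendly */
        size_t size;
};

static int gmu_bo_alloc_sketch(struct device *dev,
                               struct gmu_bo_sketch *bo, size_t size)
{
        int ret;

        /*
         * Constrain the GMU device with a DMA mask rather than a
         * dma-ranges property; 32 bits here is an assumption.
         */
        ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
        if (ret)
                return ret;

        bo->size = PAGE_ALIGN(size);

        /*
         * One call replaces the bespoke allocator: the core hands back
         * zeroed, write-combined memory that is safe to share with the
         * device, plus a device address for it.
         */
        bo->virt = dma_alloc_wc(dev, bo->size, &bo->iova, GFP_KERNEL);
        if (!bo->virt)
                return -ENOMEM;

        return 0;
}

static void gmu_bo_free_sketch(struct device *dev, struct gmu_bo_sketch *bo)
{
        /* ...and one call frees it again */
        dma_free_wc(dev, bo->size, bo->virt, bo->iova);
}

In a real driver the mask would be set once at probe time rather than per
allocation; it is folded into the helper here only to keep the sketch
self-contained.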