On Fri, May 19, 2017 at 11:53:17AM +0300, Joonas Lahtinen wrote:
> On ke, 2017-05-17 at 14:02 +0100, Chris Wilson wrote:
> > Older gen use a physical address for the hardware status page, for
> > which we use cache-coherent writes. As the writes are into the CPU
> > cache, we use a normal WB mapped page to read the HWS, used for our
> > seqno tracking.
> > 
> > Anecdotally, I observed lost breadcrumb writes into the HWS on
> > i965gm, which so far have not reoccurred with this patch. How
> > reliable that evidence is remains to be seen.
> > 
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> 
> <SNIP>
> 
> > @@ -1091,17 +1094,22 @@ static int init_status_page(struct intel_engine_cs *engine)
> >  
> >  static int init_phys_status_page(struct intel_engine_cs *engine)
> >  {
> > -	struct drm_i915_private *dev_priv = engine->i915;
> > +	struct page *page;
> >  
> > -	GEM_BUG_ON(engine->id != RCS);
> 
> Was this removal deliberate?

Yes; at this point the code is purely engine-local and generic. It
doesn't make sense to allocate a page for a non-existent engine, but in
theory, since we aren't touching the HW here, it would work. It's only
the application to the HW that is restricted now.

> > +	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
> > +	if (!page)
> > +		return -ENOMEM;
> >  
> > -	dev_priv->status_page_dmah =
> > -		drm_pci_alloc(&dev_priv->drm, PAGE_SIZE, PAGE_SIZE);
> > -	if (!dev_priv->status_page_dmah)
> > +	engine->status_page.dma_addr =
> > +		dma_map_page(engine->i915->drm.dev, page, 0, PAGE_SIZE,
> > +			     PCI_DMA_BIDIRECTIONAL);
> > +	if (dma_mapping_error(engine->i915->drm.dev,
> > +			      engine->status_page.dma_addr)) {
> > +		__free_page(page);
> >  		return -ENOMEM;
> 
> Nitpicking, but -ENOSPC would be more accurate?

More often I have seen -ENOMEM from a mapping failure than an actual
out-of-space condition. -ENOSPC has a special meaning for me: being out
of GTT space (too long spent in execbuf). But what can we do if an API
doesn't report the true error?
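
For completeness, the teardown I have in mind simply pairs
dma_unmap_page() with __free_page(). A minimal sketch, assuming the
init path also stashes page_address(page) in status_page.page_addr for
the WB reads of the HWS; the helper name is invented for illustration
here, not taken from the patch:

/* Hypothetical counterpart to init_phys_status_page(); the name and
 * the status_page.page_addr field are assumptions from this thread.
 */
static void fini_phys_status_page(struct intel_engine_cs *engine)
{
	/*
	 * Recover the struct page from the WB mapping we read the
	 * HWS through; alloc_page() gives us lowmem, so
	 * virt_to_page() is valid here.
	 */
	struct page *page = virt_to_page(engine->status_page.page_addr);

	/* Release the DMA mapping before handing the page back. */
	dma_unmap_page(engine->i915->drm.dev,
		       engine->status_page.dma_addr,
		       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
	__free_page(page);
}

-Chris
-- 
Chris Wilson, Intel Open Source Technology Centre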