Quoting Matthew Auld (2017-06-22 12:07:55)
> On 21 June 2017 at 23:51, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
> > Quoting Chris Wilson (2017-06-21 22:49:07)
> >> Quoting Matthew Auld (2017-06-21 21:33:36)
> >> > Support inserting 1G gtt pages into the 48b PPGTT.
> >> >
> >> > Signed-off-by: Matthew Auld <matthew.auld@xxxxxxxxx>
> >> > Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> >> > ---
> >> >  drivers/gpu/drm/i915/i915_gem_gtt.c | 72 ++++++++++++++++++++++++++++++++++---
> >> >  drivers/gpu/drm/i915/i915_gem_gtt.h |  2 ++
> >> >  2 files changed, 70 insertions(+), 4 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> >> > index de67084d5fcf..6fe10ee7dca8 100644
> >> > --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> >> > +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> >> > @@ -922,6 +922,65 @@ static void gen8_ppgtt_insert_3lvl(struct i915_address_space *vm,
> >> >                                cache_level);
> >> >  }
> >> >
> >> > +static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
> >> > +                                           struct i915_page_directory_pointer **pdps,
> >> > +                                           struct sgt_dma *iter,
> >> > +                                           enum i915_cache_level cache_level)
> >> > +{
> >> > +        const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level);
> >> > +        u64 start = vma->node.start;
> >> > +
> >> > +        do {
> >> > +                struct gen8_insert_pte idx = gen8_insert_pte(start);
> >> > +                struct i915_page_directory_pointer *pdp = pdps[idx.pml4e];
> >> > +                struct i915_page_directory *pd = pdp->page_directory[idx.pdpe];
> >> > +                struct i915_page_table *pt = pd->page_table[idx.pde];
> >> > +                dma_addr_t rem = iter->max - iter->dma;
> >> > +                unsigned int page_size;
> >> > +                gen8_pte_t encode = pte_encode;
> >> > +                gen8_pte_t *vaddr;
> >> > +                u16 index, max;
> >> > +
> >> > +                if (unlikely(vma->page_sizes.sg & I915_GTT_PAGE_SIZE_1G) &&
> >> > +                    IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_1G) &&
> >> > +                    rem >= I915_GTT_PAGE_SIZE_1G && !(idx.pte | idx.pde)) {
> >> > +                        vaddr = kmap_atomic_px(pdp);
> >> > +                        index = idx.pdpe;
> >> > +                        max = GEN8_PML4ES_PER_PML4;
> >> > +                        page_size = I915_GTT_PAGE_SIZE_1G;
> >> > +                        encode |= GEN8_PDPE_PS_1G;
> >> > +                } else {
> >> > +                        vaddr = kmap_atomic_px(pt);
> >> > +                        index = idx.pte;
> >> > +                        max = GEN8_PTES;
> >> > +                        page_size = I915_GTT_PAGE_SIZE;
> >> > +                }
> >> > +
> >> > +                do {
> >> > +                        vaddr[index++] = encode | iter->dma;
> >> > +
> >> > +                        start += page_size;
> >> > +                        iter->dma += page_size;
> >> > +                        if (iter->dma >= iter->max) {
> >> > +                                iter->sg = __sg_next(iter->sg);
> >> > +                                if (!iter->sg)
> >> > +                                        break;
> >> > +
> >
> > GEM_BUG_ON(iter->sg->length < page_size);
>
> That should be expected behaviour, in that we need to downgrade to a
> smaller page size on the next iteration.

It still applies just above where we set vaddr[index]. It fails here
because we have not yet decided on our course of action. I still think
there is merit in having a confirmation that sg->length does meet our
criteria, considering that we set the page_sizes a long time ago.
-Chris
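
To make the suggested placement concrete, here is a rough sketch against
the hunk quoted above, i.e. with the assertion moved to after page_size
has been decided for the current chunk, just before the first
vaddr[index] write. This is only an illustration of the placement, not
the final patch, and the comment wording is mine:

 		} else {
 			vaddr = kmap_atomic_px(pt);
 			index = idx.pte;
 			max = GEN8_PTES;
 			page_size = I915_GTT_PAGE_SIZE;
 		}

+		/*
+		 * page_size is now fixed for this chunk of the sg, so we can
+		 * confirm that the entry we are consuming really spans at
+		 * least one full page of the chosen size.
+		 */
+		GEM_BUG_ON(iter->sg->length < page_size);
+
 		do {
 			vaddr[index++] = encode | iter->dma;

Placed there it documents the assumption made when page_sizes was
computed, rather than firing on the legitimate downgrade to a smaller
page size on the next iteration that Matthew describes.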