On Wed, Jun 08, 2016 at 11:41:01AM +0200, Daniel Vetter wrote:
> On Fri, Jun 03, 2016 at 05:55:21PM +0100, Chris Wilson wrote:
> > Our GPUs impose certain requirements upon buffers that depend upon how
> > exactly they are used. Typically this is expressed as requiring a
> > larger surface than would be naively computed by pitch * height.
> > Normally such requirements are hidden away in the userspace driver, but
> > when we accept pointers from strangers and later impose extra conditions
> > on them, the original client allocator has no idea about the
> > monstrosities in the GPU and we require the userspace driver to inform
> > the kernel how many padding pages are required beyond the client
> > allocation.
> >
> > v2: Long time, no see
> > v3: Try an anonymous union for uapi struct compatibility
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
>
> Hm, where's the userspace for this? Commit message should elaborate imo a
> bit more on what's going on here ...

The ddx and igt patches have both been posted.

For dri3 the client passes us a buffer with one size that may not match
all uses (but is sufficient for its intended use). At the moment we
reject such a buffer, but I could allow it through and pad the missing
pages in the GTT instead (a la lazy fencing).

The earliest motivation for this was OpenCL wrapping blobs of userspace
memory and trying to manage the same problem: the client memory may not
match the actual requirements of the GPU.
-Chris

--
Chris Wilson, Intel Open Source Technology Centre
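
As a rough illustration of the anonymous-union trick mentioned in v3: the
struct and field names below are hypothetical (they are not the actual i915
uapi change), but they sketch how a padding-pages member could alias an
existing field without changing the struct's size or member offsets, so
userspace compiled against the old definition keeps working unmodified.

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/* What old userspace was compiled against. */
	struct example_gem_userptr_v1 {
		uint64_t user_ptr;    /* in: start of the client allocation */
		uint64_t user_size;   /* in: size of the client allocation, bytes */
		uint32_t flags;
		uint32_t handle;      /* out: GEM handle */
	};

	/*
	 * Extended version: the same 64-bit word can also be read as a pair
	 * of page counts via an anonymous union, so every member keeps its
	 * size and offset. A new flag bit (not shown) would tell the kernel
	 * which interpretation the caller is using.
	 */
	struct example_gem_userptr_v2 {
		uint64_t user_ptr;
		union {
			uint64_t user_size;          /* legacy: allocation in bytes */
			struct {
				uint32_t size_pages;     /* client allocation, in pages */
				uint32_t padding_pages;  /* extra pages the GPU requires */
			};
		};
		uint32_t flags;
		uint32_t handle;
	};

	int main(void)
	{
		/* The union does not change the ABI footprint of the struct. */
		static_assert(sizeof(struct example_gem_userptr_v1) ==
		              sizeof(struct example_gem_userptr_v2),
		              "extended struct must keep the old size");

		struct example_gem_userptr_v2 arg = { 0 };
		arg.size_pages = 16;    /* the client allocated 16 pages ...        */
		arg.padding_pages = 2;  /* ... but the GPU wants 2 more for padding */
		printf("legacy view of the same word: %#llx\n",
		       (unsigned long long)arg.user_size);
		return 0;
	}

The point of the sketch is only the compatibility property: the extension
reuses existing storage rather than growing the struct, which is why an
anonymous union is attractive for uapi headers.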