On Mon, Sep 26, 2011 at 21:56, Rob Clark <rob.clark@xxxxxxxxxx> wrote:
> On Mon, Sep 26, 2011 at 2:43 PM, Daniel Vetter <daniel@xxxxxxxx> wrote:
>> On Mon, Sep 26, 2011 at 01:18:40PM -0500, Rob Clark wrote:
>>> On Thu, Sep 15, 2011 at 5:47 PM, Rob Clark <rob.clark@xxxxxxxxxx> wrote:
>>> > +/**
>>> > + * drm_gem_get_pages - helper to allocate backing pages for a GEM object
>>> > + * @obj: obj in question
>>> > + * @gfpmask: gfp mask of requested pages
>>> > + */
>>> > +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
>>> > +{
>>>
>>> Hmm, in working through tiled buffer support over the weekend, I think
>>> I've hit a case where I want to decouple the physical size (in terms
>>> of pages) from the virtual size, which means I don't want to rely on
>>> the same obj->size value for mmap offset creation as for determining
>>> the number of pages to allocate.
>>>
>>> Since the patch for drm_gem_{get,put}_pages() doesn't seem to be on
>>> drm-core-next yet, I think the more straightforward thing to do is add
>>> a size (or numpages) arg to the get/put_pages functions and resubmit
>>> this patch.
>>
>> I think making obj->size not be the size of the backing storage is the
>> wrong thing. In i915 we have a similar issue with tiled regions on
>> gen2/3: the minimal sizes for these are pretty large, so objects are
>> smaller than the tiled region. So backing storage and maps are both
>> obj->size large, but the area the object occupies in the GTT may be
>> much larger. To put some backing storage behind the unused region
>> (which sounds like what you're trying to do here) we simply use one
>> global dummy page. Userspace should never use this, so that's fine.
>>
>> Or is this a case of insane ARM hw, and you can't actually put the
>> same physical page at different GTT locations?
>
> Well, the "GART" in our case is a bit restrictive.. we can have the
> same page in multiple locations, but not arbitrary locations. Things
> need to go to slot boundaries.
> So it isn't really possible to get row n+1 to appear directly after
> row n. To handle this we just round the buffer stride up to the next
> 4KB boundary and insert some dummy page into the remaining slots to
> pad out to 4KB.
>
> The only other way I could think of to handle that would be to have a
> separate vsize and psize in 'struct drm_gem_object'.

Well, I think for this case the solution is simple: tiling is not
allowed if userspace is too dumb to properly round the buffer up so it
fulfills whatever odd requirement the hw has. I think hiding the fact
that certain buffers need more backing storage than a naive userspace
might assume is ripe for ugly problems down the road.
-Daniel
--
Daniel Vetter
daniel.vetter@xxxxxxxx - +41 (0) 79 364 57 48 - http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel
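[Editor's note: the stride-padding scheme Rob describes (rounding the
buffer stride up to the next 4 KiB slot boundary and filling the slack
with a dummy page) can be sketched roughly as below. This is an
illustration only: SLOT_SIZE, padded_buffer_size(), and the ALIGN macro
here are stand-ins, not code from the actual driver, though ALIGN
mirrors the kernel macro of the same name.]

```c
#include <assert.h>
#include <stddef.h>

/* Slot granularity of the hypothetical "GART": pages can only be
 * mapped at slot boundaries, so each row of a tiled buffer must start
 * on one. */
#define SLOT_SIZE 4096u

/* Round x up to the next multiple of a (a must be a power of two);
 * same idea as the kernel's ALIGN() macro. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* Total backing size once each row's stride is padded out to a slot
 * boundary. The tail of each padded row would be backed by a shared
 * dummy page rather than real per-buffer storage. */
static size_t padded_buffer_size(size_t stride, size_t rows)
{
    size_t padded_stride = ALIGN(stride, SLOT_SIZE);
    return padded_stride * rows;
}
```

For example, a buffer with a 5000-byte stride gets padded to an
8192-byte stride, so row n+1 no longer starts immediately after row n
in the virtual layout; the 3192-byte gap per row is what the dummy
page fills.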