Quoting Tvrtko Ursulin (2018-10-09 09:24:33)
>
> On 05/10/2018 19:42, Ville Syrjälä wrote:
> > On Mon, Oct 01, 2018 at 04:48:21PM +0100, Tvrtko Ursulin wrote:
> >>
> >> On 01/10/2018 16:37, Chris Wilson wrote:
> >>> Quoting Ville Syrjälä (2018-10-01 16:27:43)
> >>>> On Mon, Oct 01, 2018 at 04:12:09PM +0100, Chris Wilson wrote:
> >>>>> Quoting Ville Syrjälä (2018-10-01 16:03:30)
> >>>>>> On Wed, Sep 26, 2018 at 08:50:25AM +0100, Tvrtko Ursulin wrote:
> >>>>>>>
> >>>>>>> On 25/09/2018 20:37, Ville Syrjala wrote:
> >>>>>>> One more thing, do you really need random access for this
> >>>>>>> transformation? Or could you walk the sg list as it is? Just if you
> >>>>>>> hit a too long chunk you need to copy a trimmed version over and know
> >>>>>>> where to continue for the next row. If doable it would be better than
> >>>>>>> having to kvmalloc_array.
> >>>>>>
> >>>>>> I think Chris suggested just using i915_gem_object_get_dma_address()
> >>>>>> here. But I'm not sure why we're not using it for rotate_pages()
> >>>>>> as well.
> >>>>>
> >>>>> Tvrtko is opposed to populating the obj->mm.pages cache with no defined
> >>>>> release point. I say the mempressure and shrinker should do the right
> >>>>> thing, but it's a big if.
> >>>>
> >>>> OK.
> >>>>
> >>>> Well, it looks to me like i915_gem_object_get_dma_address() is the
> >>>> only convenient-looking thing for iterating the pages without
> >>>> drowning the code in irrelevant details about sgs and whatnot.
> >>>> I suppose it should be possible to write some helpers that avoid
> >>>> all that and don't need the temp array, but I'm not really
> >>>> motivated enough to do that myself.
> >>>
> >>> Keep it simple and use get_dma_address(). We can find ways to throw away
> >>> the cache later if need be.
> >>
> >> I'd do it straight away. I think the cache for a large framebuffer, the
> >> kind which needs remapping, could be quite big!
> >> And the more fragmented the
> >> memory the bigger the cache, so if it sticks around pointlessly for
> >> the lifetime of the framebuffer it is a double whammy.
> >
> > The tree is indexed with the ggtt offset so memory fragmentation
> > shouldn't matter I think. Or did I totally miss something?
>
> I think it is indexed by page index, but the number of tree entries
> depends on the number of sg chunks, which in turn depends on the system
> memory fragmentation. Consecutive pages are stored as "exceptional"
> entries so that is more efficient IIRC, but TBH I don't remember how
> many of those it can store before it needs a new tree node. Anyway, if
> I am not completely mistaken, the tree metadata size is proportional
> to backing store fragmentation, and secondarily proportional to object
> size (due to exceptional entry spill).
>
> > The relative overhead should be highest for a single-page object
> > (576/4096 = ~15%). For big objects it should be something around
> > .2% AFAICS.
>
> It is a bit annoying since the radix tree does not have a helper to tell
> us the number of allocated nodes. I don't remember how I measured this
> last time... Will try to recall.
>
> > The shrinker can throw the cache out for non-pinned fbs. It looks
> > like it does consider all fbs active though, so it tries not to kick
> > them out as the first thing. For pinned fbs the cache
> > would remain allocated forever, however.
> >
> > Keeping the cache around should be great for panning within
> > large remapped fbs. Although I suppose that panning isn't a
> > very common use case these days.
>
> We could just export __i915_gem_object_reset_page_iter, and look at the
> required locking when used from rotate/remap, and I'd be happy to start
> with that. But okay, let me first try to recall/re-measure how big these
> trees are...

Also, there should soon be a generic radix tree implementation, so we should
be free to swap out the tagged nodes for a more streamlined tree.
But seriously, the sole argument is that you dislike the caching, and you
even argue against the caching in situations like this where the cache will
be reused! (Since we need to remap each crtc into the single fb.) We can't
even randomly call reset_page_iter unless you know you own the iter (since
it's common to everyone who has a pin_pages). Revoking unpinned pages under
mempressure, i.e. the default shrinker behaviour, seems satisfactory.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx