Quoting Tvrtko Ursulin (2017-07-27 11:46:03)
>
> On 27/07/2017 10:25, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2017-07-27 10:05:00)
> >> From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> >>
> >> Yet another attempt to get this series reviewed and merged...
> >>
> >> I've heard Vulkan might be creating a lot of userptr objects, so it might be
> >> interesting to check what benefit it brings to those use cases.
> >
> > Optimist :) My thinking is that this should only impact get_pages ->
> > vma_bind, which is supposed to be a rare operation, and if it should happen
> > as part of the steady state then having too many sg in a chain is just one
> > of the myriad little paper cuts :)
>
> I did not try to sell any performance benefits. There might be some
> micro (pico?) ones due to less walking and/or a smaller memory footprint.
> But slab reduction is the main point. It's not a big one, but why not do it
> if we can. It also makes userptr consistent with our other bos in this
> respect, and is simpler code in i915_gem_userptr.c.

It's definitely beneficial, no doubt. Just a new minor user isn't all
that exciting ;)

> >> As an introduction, this allows i915 to create fewer sg table entries for
> >> the bo backing store representation. As such it primarily saves kernel
> >> slab memory.
> >>
> >> When we added this optimisation to normal i915 bos, the savings were, as
> >> far as I remember, around 1-2MiB of slab after booting to a KDE desktop,
> >> and 2-4MiB on the neverball (game) main screen (or maybe it was while
> >> playing).
> >
> > I think we also want to think about the aspect where we are creating
> > objects of multiple 1G huge pages, so we are going to run into the sg
> > limits very quickly.
>
> You mean changing the core struct to allow larger chunks? Haven't the
> core kernel people already rolled their eyes at our sg table misuse? :)

They fortunately haven't seen ours... But yes, Linus did rant about doing
2+G of dma as being wrong... Too bad. It doesn't stop it from being the
interface into dma remapping etc. :|
-Chris
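
For anyone following the thread, the coalescing being discussed is roughly the
following: instead of one scatterlist entry per page, runs of physically
contiguous pages are folded into a single entry, so fewer sg entries (and hence
less slab) are needed. This is a minimal illustrative sketch, not the actual
patch; the function name sketch_pages_to_sgt is made up for the example.

	#include <linux/err.h>
	#include <linux/mm.h>
	#include <linux/scatterlist.h>
	#include <linux/slab.h>

	static struct sg_table *
	sketch_pages_to_sgt(struct page **pages, unsigned long n_pages)
	{
		struct sg_table *st;
		struct scatterlist *sg;
		unsigned long i;

		st = kmalloc(sizeof(*st), GFP_KERNEL);
		if (!st)
			return ERR_PTR(-ENOMEM);

		/* Worst case: no page is contiguous with its neighbour. */
		if (sg_alloc_table(st, n_pages, GFP_KERNEL)) {
			kfree(st);
			return ERR_PTR(-ENOMEM);
		}

		sg = st->sgl;
		st->nents = 0;
		for (i = 0; i < n_pages; i++) {
			if (st->nents &&
			    page_to_pfn(pages[i]) ==
			    page_to_pfn(pages[i - 1]) + 1) {
				/* Contiguous with the current run: grow it. */
				sg->length += PAGE_SIZE;
			} else {
				/* Start a new run in the next sg entry. */
				if (st->nents)
					sg = sg_next(sg);
				st->nents++;
				sg_set_page(sg, pages[i], PAGE_SIZE, 0);
			}
		}
		/* Mark the last entry actually used as the end of the table. */
		sg_mark_end(sg);

		return st;
	}

On the sg limits point above: struct scatterlist's length field is an unsigned
int, so even a fully coalesced entry tops out below 4GiB, and a handful of 1G
huge pages is enough to reach that ceiling; that is presumably the limit (and
the "2+G of dma" complaint) being alluded to.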