On 16 March 2018 at 08:43, Daniel Vetter <daniel@xxxxxxxx> wrote:
> On Thu, Mar 15, 2018 at 06:20:09PM -0700, James Xiong wrote:
>> From: "Xiong, James" <james.xiong@xxxxxxxxx>
>>
>> With gem_reuse enabled, when a buffer's size does not match one of
>> the bucket sizes, it is rounded up to the next bucket's size, which
>> means up to roughly 25% more memory than requested is allocated in
>> the worst-case scenario. For example:
>>
>> Original size    Actual
>> 32KB+1Byte       40KB
>> ...
>> 8MB+1Byte        10MB
>> ...
>> 96MB+1Byte       112MB
>>
>> This is very memory expensive and makes the reuse feature less
>> favorable than it deserves to be.
>>
>> This series aligns the reused buffer size to the page size instead,
>> to save memory. gfxbench tests on Gen9 without LLC showed the same
>> performance and reuse ratios (reuse count/allocation count) as
>> before, while memory usage dropped by 1% ~ 7% (gl_manhattan: peak
>> allocated memory size was reduced from 448401408 to 419078144).
>>
>> v2: split the patch into a series of small functional changes (Chris)
>
> The Mesa gen driver stopped using the libdrm buffer allocator. The
> gen2/3 driver still uses it, but I think that's not the one you
> benchmarked. The 17.1 release was the first one with that change.
>
> I think you want to port your changes over to Mesa to future-proof
> them; merging this upstream makes little sense.

Perhaps it can live in both? After all, i915, libva and beignet still
make use of libdrm_intel.

The Mesa copy has changed drastically, so this series might need
serious rework.

-Emil
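
The bucket rounding the cover letter describes can be illustrated with
a small standalone program. This is only a sketch: the quarter-step
bucket sizes are modeled on libdrm's init_cache_buckets(), but the
helper names, the 256MB cap, and the fallback below are made up for the
illustration and are not the actual intel_bufmgr_gem.c code.

  #include <stdio.h>

  #define PAGE_SIZE 4096UL

  /* Smallest bucket >= size: 4K, 8K, 12K, then power-of-two steps
   * with three quarter-step buckets in between (16K, 20K, 24K, 28K,
   * 32K, 40K, 48K, 56K, 64K, ...). */
  static unsigned long bucket_round(unsigned long size)
  {
          unsigned long base;

          if (size <= 3 * 4096)
                  return ((size + 4095) / 4096) * 4096;

          for (base = 4 * 4096; base <= 256UL * 1024 * 1024; base *= 2) {
                  unsigned long step = base / 4;
                  int i;

                  for (i = 0; i < 4; i++)
                          if (size <= base + i * step)
                                  return base + i * step;
          }

          /* Past the largest modeled bucket: just page-align. */
          return ((size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
  }

  /* Page-size rounding, as the series proposes. */
  static unsigned long page_round(unsigned long size)
  {
          return ((size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
  }

  int main(void)
  {
          unsigned long sizes[] = {
                  32UL * 1024 + 1,
                  8UL * 1024 * 1024 + 1,
                  96UL * 1024 * 1024 + 1,
          };
          unsigned int i;

          for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                  printf("%lu -> bucket %lu, page-aligned %lu\n",
                         sizes[i], bucket_round(sizes[i]),
                         page_round(sizes[i]));
          return 0;
  }

Running it prints 40960 vs 36864 for the 32KB+1Byte case, 10485760 vs
8392704 for 8MB+1Byte, and 117440512 vs 100667392 for 96MB+1Byte,
which is the overhead the series trims by switching to page alignment.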