On Mon, Feb 24, 2014 at 11:40 AM, Christian König <deathsimple@xxxxxxxxxxx> wrote:
> Hi Marek,
>
> Some minor comments on patches 1, 2 and 5, but nothing serious. Patches
> 3, 4 and 6 are Reviewed-by: Christian König <christian.koenig@xxxxxxx>
>
> See below for a few inline comments.
>
> On 24.02.2014 16:20, Marek Olšák wrote:
>
>> This series improves performance for cases where there is not enough
>> VRAM for all buffers.
>>
>> First of all, I'd like to mention that if you set both the VRAM and
>> GTT domains for a buffer, you are effectively saying you don't care
>> where the buffer ends up. That usually makes performance even worse.
>>
>> This work was largely benchmark-driven, and I tried a lot of ideas
>> before finding out which ones work. The patches describe what they do
>> and are quite simple, so I'll just share the results here.
>>
>>
>> Card: Evergreen Redwood (HD 5670), 512 MB of VRAM
>> Test: Unigine Heaven 4.0, High settings
>>
>> 1) 1280x720, 4x MSAA, needs 525 MB of VRAM
>>
>> Without patches: 16.6 FPS
>> With patches: 16.6 FPS
>> Improvement: 0 %
>>
>> 2) 1600x900, 4x MSAA, needs 642 MB of VRAM
>>
>> Without patches: 7.1 FPS
>> With patches: 9.7 FPS
>> Improvement: 36 %
>>
>> 3) 1920x1080, 4x MSAA, needs 743 MB of VRAM
>>
>> Without patches: 3.7 FPS
>> With patches: 5.6 FPS
>> Improvement: 51 %
>>
>> 4) 1600x900, 8x MSAA, needs 838 MB of VRAM
>>
>> Without patches: 2.9 FPS
>> With patches: 4.6 FPS
>> Improvement: 58 %
>>
>> These results don't change if you run the benchmark several times,
>> which shows the improvement is stable.
>>
>>
>> To conclude, here are some ideas for future work:
>>
>> 1) Add virtual memory support for VRAM. Our GPUs support virtual
>> memory, which not only solves fragmentation issues, but also allows
>> each buffer to be partially in VRAM and partially in GTT, which
>> becomes more important with large buffers of around 100 MB. Moving
>> whole buffers back and forth between VRAM and GTT is inefficient when
>> the same could be done at page granularity. Also, due to
>> fragmentation, we can never really use all of VRAM, only about 90-95%
>> of it.
>
> Yeah, I've also been thinking about this for quite some time now. The
> basic problem is that while our GPUs support VM, they don't support
> faulting pages in and continuing (at least nobody has gotten that
> working reliably so far). That is, when you hit a page fault, you
> can't relocate the page and then continue.

Well, for non-scanout buffers we can do scatter/gather for VRAM pages
rather than requiring contiguous buffers. That would at least avoid
low-memory situations where there is enough VRAM, just not enough
contiguous VRAM. Another option would be to write a TTM defragmenter,
but I think we'd have to fix the synchronization issues with BO moves
first.
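To make that concrete, here is a minimal toy sketch of what
page-granular VRAM allocation buys under fragmentation. This is
illustrative userspace C, not TTM code, and every name in it is made
up:

/*
 * Toy illustration: treat VRAM as individual pages and gather whatever
 * free pages exist, instead of demanding one contiguous range.
 */
#include <stdbool.h>
#include <stdint.h>

#define VRAM_PAGES 4096            /* toy VRAM: 4096 x 4 KiB = 16 MiB */

static bool page_used[VRAM_PAGES]; /* one bit of state per VRAM page */

/*
 * Gather 'count' free pages into 'pages' (an sg-style list of page
 * indices). Succeeds whenever enough free pages exist anywhere in
 * VRAM, no matter how fragmented it is.
 */
static int vram_alloc_scattered(uint32_t *pages, unsigned count)
{
    unsigned found = 0;

    for (uint32_t i = 0; i < VRAM_PAGES && found < count; i++) {
        if (!page_used[i])
            pages[found++] = i;
    }
    if (found < count)
        return -1;                 /* genuinely out of VRAM */
    for (unsigned i = 0; i < count; i++)
        page_used[pages[i]] = true;
    return 0;
}

/*
 * A contiguous allocator for comparison: this is what scanout buffers
 * still need, and what fails first under fragmentation.
 */
static int vram_alloc_contiguous(uint32_t *start, unsigned count)
{
    for (uint32_t base = 0; base + count <= VRAM_PAGES; base++) {
        unsigned i;
        for (i = 0; i < count && !page_used[base + i]; i++)
            ;
        if (i == count) {
            for (i = 0; i < count; i++)
                page_used[base + i] = true;
            *start = base;
            return 0;
        }
    }
    return -1;        /* enough pages may exist, just not adjacent */
}

int main(void)
{
    uint32_t pages[8], start;

    /* Fragment VRAM: mark every other page used. */
    for (uint32_t i = 0; i < VRAM_PAGES; i += 2)
        page_used[i] = true;

    /* Contiguous allocation of 8 pages now fails... */
    int c = vram_alloc_contiguous(&start, 8);   /* -1 */
    /* ...while gathering 8 scattered pages still succeeds. */
    int s = vram_alloc_scattered(pages, 8);     /* 0 */
    return (c == -1 && s == 0) ? 0 : 1;
}

The point is just that the scattered allocator only fails when VRAM is
genuinely full, while the contiguous one can fail long before that.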
Alex

> Support for partially resident textures on newer hardware currently
> works by splitting the buffer up into smaller buffers in userspace
> and then actively checking in the shader whether we hit a buffer
> that's not currently in memory, but that's not really applicable in
> the general use case (too much shader overhead).
>
>> 2) Add support for uncached GTT. I think it should improve
>> performance for dGPUs under memory pressure, but some testing needs
>> to be done to confirm that. Uncached GTT doesn't seem to work for me
>> on Evergreen, but it's said to be working on some later chips.
>
> Did you try to make the whole GTT uncached, or just evicted BOs?
> Making the whole GTT uncached probably won't work out of the box, but
> avoiding setting the "SNOOPED" flag on those pages might get us
> better performance when swapping them back into VRAM.
>
> Christian.
>
>> The patches for Mesa will follow later today. Please review.
>>
>> Marek
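P.S.: To make the "SNOOPED" idea above concrete, the per-page decision
could look roughly like the sketch below. The flag names and structures
are made up for illustration; this is not the actual radeon GART code,
and whether skipping the snoop bit is safe on a given chip is exactly
what would need testing.

/*
 * Hypothetical sketch: when binding pages into the GART, only set the
 * snooped (CPU-cache-coherent) bit for BOs the CPU will actually
 * touch. Pages that merely pass through GTT on their way back into
 * VRAM are GPU-only and could skip snooping. All names are made up.
 */
#include <stdbool.h>
#include <stdint.h>

#define GART_PAGE_VALID   (1u << 0)   /* entry points at a real page */
#define GART_PAGE_READ    (1u << 1)
#define GART_PAGE_WRITE   (1u << 2)
#define GART_PAGE_SNOOPED (1u << 3)   /* coherent with CPU caches */

struct gart_bind {
    uint64_t dma_addr;   /* bus address of the system page */
    bool     cpu_access; /* will the CPU map/touch this BO? */
};

static uint32_t gart_page_flags(const struct gart_bind *b)
{
    uint32_t flags = GART_PAGE_VALID | GART_PAGE_READ | GART_PAGE_WRITE;

    /*
     * Evicted BOs that are only waiting to be moved back into VRAM
     * are read by the GPU alone, so snooping the CPU caches on every
     * access just costs bus cycles.
     */
    if (b->cpu_access)
        flags |= GART_PAGE_SNOOPED;

    return flags;
}

int main(void)
{
    struct gart_bind evicted = { .dma_addr = 0x100000, .cpu_access = false };
    struct gart_bind mapped  = { .dma_addr = 0x200000, .cpu_access = true };

    /* Evicted BO pages skip the snoop bit; CPU-visible ones keep it. */
    return ((gart_page_flags(&evicted) & GART_PAGE_SNOOPED) == 0 &&
            (gart_page_flags(&mapped)  & GART_PAGE_SNOOPED) != 0) ? 0 : 1;
}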