https://bugzilla.kernel.org/show_bug.cgi?id=214425

--- Comment #3 from Rafael Ristovski (rafael.ristovski@xxxxxxxxx) ---

(In reply to Martin Doucha from comment #2)
> (In reply to Rafael Ristovski from comment #1)
> > According to the amdgpu devs, this is a feature: the allocated pages are
> > kept around in case they are needed again later. TTM is able to release
> > the memory if memory pressure increases.
>
> I understand the logic behind keeping idle buffers allocated for a while.
> But it does not make sense to keep them for hours after last use, and the
> release mechanism on increased memory pressure does not seem to be
> working.
>
> When I run a large compilation overnight, starting from a fresh reboot
> and shutting down all graphics software including the X server, I'll
> often come back in the morning to find that 70% of all RAM is allocated
> in idle TTM buffers and GCC is stuck swapping for hours. The TTM buffers
> were likely allocated by some GPU-accelerated build step halfway through
> the night. But this is harder to reproduce than the games I mentioned in
> the initial bug report.

Indeed, I too run into situations where even if I purposefully trigger an
OOM situation just to get the TTM "cache" to evict itself through memory
pressure, _it still does not end up releasing all of the memory_.

There are also the following two debugfs files; simply reading them
triggers an eviction of VRAM and GTT respectively:

> cat /sys/kernel/debug/dri/0/amdgpu_evict_vram
> cat /sys/kernel/debug/dri/0/amdgpu_evict_gtt

That the eviction happens can be confirmed with tools like
`radeontop`/`nvtop` (or without them, see the sketch at the end of this
comment). However, this once again does not release the TTM buffers back
to the system.

As you can see in the issue I linked, I never got a reply about a
mechanism to manually release TTM memory. I will try to coax an answer
out of the developers on IRC; perhaps I will have better luck asking
directly there.
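
For reference, here is a minimal sketch of how the eviction can be checked
without radeontop/nvtop, by reading amdgpu's own usage counters from sysfs
before and after poking the debugfs eviction files. It assumes card0 is the
amdgpu device, debugfs is mounted at /sys/kernel/debug, and it is run as
root; adjust the card/DRI numbers for your setup.

#!/bin/sh
# Sketch: trigger amdgpu VRAM/GTT eviction via debugfs and compare the
# driver's usage counters before and after. Assumes card0 is the amdgpu
# device and debugfs is mounted; run as root.

DEV=/sys/class/drm/card0/device
DBG=/sys/kernel/debug/dri/0

usage() {
    # amdgpu exposes current usage in bytes via these sysfs files
    printf 'VRAM used: %s bytes, GTT used: %s bytes\n' \
        "$(cat "$DEV/mem_info_vram_used")" \
        "$(cat "$DEV/mem_info_gtt_used")"
}

echo "Before eviction:"
usage

# Reading these debugfs files is what triggers the eviction
cat "$DBG/amdgpu_evict_vram" > /dev/null
cat "$DBG/amdgpu_evict_gtt" > /dev/null

echo "After eviction:"
usage

# Even after the counters above drop, the evicted pages may remain in the
# TTM page pool; MemAvailable shows whether they actually came back to the
# kernel.
grep MemAvailable /proc/meminfo

In my testing the VRAM/GTT counters drop after the two `cat`s, but
MemAvailable barely moves, which is exactly the "TTM buffers are not
released" behaviour described above.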