On 2022-05-20 05:42, Christian König wrote:
In theory we should allow much more than that. The problem is just
that we can't.
We have the following issues:
1. For swapping out stuff we need to make sure that we can allocate
temporary pages.
Because of this TTM has a fixed 50% limit where it starts to unmap
memory from GPUs.
So currently, even with a higher GTT limit, you can't actually use more
than that (see the sketch after this list).
2. Apart from the test case of allocating textures with increasing
power of two until it fails, we also have a bunch of extremely stupid
applications.
E.g. stuff like looking at the amount of memory available and
trying to preallocate everything.
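
To make the first issue a bit more concrete, here is a minimal userspace
sketch of the kind of accounting involved (illustrative only, not TTM's
actual code; pages_limit and try_map_pages are invented names): half of
system RAM is held back as headroom for the temporary pages needed while
swapping, so GPU mappings stop growing at roughly 50% no matter how large
the GTT is configured.

/* Illustrative sketch only, not TTM's real implementation: models a fixed
 * "half of system RAM" cap on GPU-mapped system pages, keeping the other
 * half free for the temporary pages needed while swapping things out.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/sysinfo.h>

static uint64_t pages_limit;   /* invented name: cap, in pages */
static uint64_t pages_mapped;  /* invented name: current usage */

static bool try_map_pages(uint64_t num_pages)
{
        if (pages_mapped + num_pages > pages_limit)
                return false;  /* this is where unmapping/eviction would kick in */
        pages_mapped += num_pages;
        return true;
}

int main(void)
{
        struct sysinfo si;
        uint64_t total_pages;

        sysinfo(&si);
        total_pages = (uint64_t)si.totalram * si.mem_unit / 4096;
        pages_limit = total_pages / 2;  /* the fixed 50% limit */

        /* A GTT size above the limit doesn't help: mapping stalls at ~50%. */
        if (!try_map_pages(total_pages * 3 / 4))
                printf("3/4 of RAM cannot be mapped, the 50%% limit hits first\n");
        return 0;
}
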
I've been working on this for years, but there aren't easy solutions to
these issues. Felix has opted for adding a separate domain for KFD
allocations, but sooner or later we need to find a solution which
works for everybody.
For the record, the reason I added a new domain is that the GTT
limit otherwise applies to memory that isn't even managed by TTM
(userptrs) and isn't under TTM's 50% system memory limit in the first place.
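
For context, a rough userspace sketch of what a userptr allocation looks
like (illustrative only, going through libdrm's
amdgpu_create_bo_from_user_mem() helper rather than the actual KFD ioctl
path; the device node path and sizes are just example values): the backing
pages come from the process's own anonymous memory, so TTM never allocates
them.

/* Rough userspace sketch of a userptr registration via libdrm
 * (illustrative; the KFD path differs in detail).
 * Build (assumed): gcc sketch.c -I/usr/include/libdrm -ldrm_amdgpu -ldrm
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <amdgpu.h>

int main(void)
{
        amdgpu_device_handle dev;
        amdgpu_bo_handle bo;
        uint32_t drm_major, drm_minor;
        const uint64_t size = 64ull * 1024 * 1024;
        void *cpu;

        int fd = open("/dev/dri/renderD128", O_RDWR); /* node path is an assumption */
        if (fd < 0 || amdgpu_device_initialize(fd, &drm_major, &drm_minor, &dev))
                return 1;

        /* The pages come from the process's own anonymous memory ... */
        if (posix_memalign(&cpu, 4096, size))
                return 1;

        /* ... and are merely registered with the driver as a userptr BO;
         * TTM does not allocate them, which is why counting them against
         * the GTT size limit is questionable. */
        if (amdgpu_create_bo_from_user_mem(dev, cpu, size, &bo))
                return 1;

        printf("registered %llu MiB of process memory as a userptr BO\n",
               (unsigned long long)(size >> 20));

        amdgpu_bo_free(bo);
        amdgpu_device_deinitialize(dev);
        free(cpu);
        return 0;
}
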
Regards,
Felix
Christian.
On 20.05.22 at 11:14, Marek Olšák wrote:
Ignore the silly tests. We only need to make sure games work. The
current minimum requirement for running modern games is 8GB of GPU
memory. Soon it will be 12GB. APUs will need to support that.
Marek
On Fri., May 20, 2022, 03:52 Christian König,
<ckoenig.leichtzumerken@xxxxxxxxx> wrote:
On 19.05.22 at 16:34, Alex Deucher wrote:
> The current somewhat strange logic is in place because certain
> GL unit tests for large textures can cause problems with the
> OOM killer since there is no way to link this memory to a
> process. The problem is this limit is often too low for many
> modern games on systems with more memory so limit the logic to
> systems with less than 8GB of main memory. For systems with 8
> or more GB of system memory, set the GTT size to 3/4 of system
> memory.
It's unfortunately not only the unit tests, but some games as well.
3/4 of total system memory sounds reasonable to me, but I'm 100% sure
that this will break some tests.
Christian.
>
> Signed-off-by: Alex Deucher <alexander.deucher@xxxxxxx>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 25 ++++++++++++++++++++-----
>   1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 4b9ee6e27f74..daa0babcf869 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1801,15 +1801,30 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>         /* Compute GTT size, either bsaed on 3/4th the size of RAM size
>          * or whatever the user passed on module init */
>         if (amdgpu_gtt_size == -1) {
> +               const u64 eight_GB = 8192ULL * 1024 * 1024;
>                 struct sysinfo si;
> +               u64 total_memory, default_gtt_size;
>
>                 si_meminfo(&si);
> -               gtt_size = min(max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
> -                              adev->gmc.mc_vram_size),
> -                              ((uint64_t)si.totalram * si.mem_unit * 3/4));
> -       }
> -       else
> +               total_memory = (u64)si.totalram * si.mem_unit;
> +               default_gtt_size = total_memory * 3 / 4;
> +               /* This somewhat strange logic is in place because certain GL unit
> +                * tests for large textures can cause problems with the OOM killer
> +                * since there is no way to link this memory to a process.
> +                * The problem is this limit is often too low for many modern games
> +                * on systems with more memory so limit the logic to systems with
> +                * less than 8GB of main memory.
> +                */
> +               if (total_memory < eight_GB) {
> +                       gtt_size = min(max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
> +                                          adev->gmc.mc_vram_size),
> +                                          default_gtt_size);
> +               } else {
> +                       gtt_size = default_gtt_size;
> +               }
> +       } else {
>                 gtt_size = (uint64_t)amdgpu_gtt_size << 20;
> +       }
>
>         /* Initialize GTT memory pool */
>         r = amdgpu_gtt_mgr_init(adev, gtt_size);
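
For a rough sense of what the new default in the patch above works out to
(back-of-the-envelope numbers, assuming AMDGPU_DEFAULT_GTT_SIZE_MB is 3072
and an APU with 512 MiB of carve-out VRAM; both are just example values):

- 4 GiB of RAM (< 8 GiB): gtt_size = min(max(3 GiB, 0.5 GiB), 3/4 * 4 GiB) = 3 GiB, i.e. the existing behavior is kept.
- 16 GiB of RAM (>= 8 GiB): gtt_size = 3/4 * 16 GiB = 12 GiB, where the old min/max logic would have yielded 3 GiB. That is what makes the 8-12 GB requirement Marek mentions reachable on APUs.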