Bug ID | 107403 |
---|---|
Summary | Quadratic behavior due to leaking fence contexts in reservation objects |
Product | DRI |
Version | XOrg git |
Hardware | Other |
OS | All |
Status | NEW |
Severity | normal |
Priority | medium |
Component | DRM/AMDgpu |
Assignee | dri-devel@lists.freedesktop.org |
Reporter | bas@basnieuwenhuizen.nl |
As part of the Vulkan CTS, radv creates about 30k AMDGPU contexts (though only about 1-20 are live at the same time). Each of those creates a set of fence contexts, one per ring, to use for fences created from submitted jobs. However, as part of running jobs, fences with those contexts get attached to the vm->root.base.bo->tbo.resv of the corresponding vm, which means that at some point we have tens of thousands of fences attached to it, as they never get removed. A fence is only ever deduplicated against a later fence from the same fence context, so fences from destroyed contexts never go away.

Then, in amdgpu_gem_va_ioctl -> amdgpu_vm_clear_freed -> amdgpu_vm_bo_update_mapping, we do an amdgpu_sync_resv, which tries to add all of those fences to an amdgpu_sync object. That object only has a 16-entry hashtable, so adding the fences results in quadratic behavior. Combine this with the sparse buffer tests at the end, which do lots of VA operations, and tests end up taking 20+ minutes.

I could reduce the number of amdgpu contexts a bit in radv, but the bigger issue in my opinion is that we are pretty much leaking the fences and never reclaiming them. Any idea how best to remove some signalled fences?