On 6/10/21 9:57 PM, Matthew Auld wrote:
Hi,
On Thu, 10 Jun 2021 at 20:02, Thomas Hellström
<thomas.hellstrom@xxxxxxxxxxxxxxx> wrote:
Hi, Matthew!
I got a funny result from the hugepages selftest when trying to break
out some functionality from shmem to make a TTM page pool for
cached-only TTM system bos.
It turns out that shmem computed the page sizes using the underlying
pages rather than the DMA segments, so when I changed that, hugepages
started failing:
https://patchwork.freedesktop.org/series/91227/
But when I hack the page-size computation back to using the underlying
pages, it's fine again:
https://patchwork.freedesktop.org/series/91336/
It seems like some assumption about huge DMA segments is wrong, either
in our page-size calculation, in the selftest, or in the actual
huge-page setup. Could it be that huge-sized segments are assumed to be
properly aligned?
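
To make the alignment question concrete, a per-segment check along
these lines would decide the largest page size a DMA segment could
actually back (a hypothetical helper for illustration only; the real
decision is made in the GTT insertion paths and may differ):

#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/sizes.h>

/*
 * Illustrative only: a coalesced DMA segment can only back a huge GTT
 * page if its DMA address is aligned to that page size and it is long
 * enough; a large but misaligned segment would not qualify.
 */
static unsigned int largest_usable_page_size(struct scatterlist *sg)
{
	dma_addr_t addr = sg_dma_address(sg);
	unsigned int len = sg_dma_len(sg);

	if (len >= SZ_2M && IS_ALIGNED(addr, SZ_2M))
		return SZ_2M;
	if (len >= SZ_64K && IS_ALIGNED(addr, SZ_64K))
		return SZ_64K;
	return SZ_4K;
}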
We disabled THP for $reasons, so shrink_thp will pretty much always
skip, I think, unless we happen to coalesce enough pages to make a 2M
page. I guess with your change that is somehow more likely, now that we
use i915_sg_dma_sizes() and call it after we do the dma_map_sg(). I
think the Intel IOMMU driver also does coalescing or something. The
sg_page_sizes value is mostly just a heuristic, though.
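
For reference, the difference between deriving the mask from the
backing pages and from the DMA segments is roughly the following. This
is a sketch with made-up helper names, not the actual shmem/i915 code;
the point is only that the second loop sees whatever segment lengths
dma_map_sg() (and any IOMMU coalescing) produced:

#include <linux/scatterlist.h>

/* Sketch: mask of sizes seen in the CPU/backing-store segments. */
static unsigned int page_sizes_from_pages(struct sg_table *st)
{
	struct scatterlist *sg;
	unsigned int sizes = 0;
	unsigned int i;

	for_each_sgtable_sg(st, sg, i)
		sizes |= sg->length;

	return sizes;
}

/*
 * Sketch: mask of sizes seen in the DMA segments instead. After
 * dma_map_sg() an IOMMU may have merged contiguous entries, so larger
 * bits (e.g. 2M) can appear here even without THP-backed pages.
 */
static unsigned int page_sizes_from_dma(struct sg_table *st)
{
	struct scatterlist *sg;
	unsigned int sizes = 0;
	unsigned int i;

	for_each_sgtable_dma_sg(st, sg, i)
		sizes |= sg_dma_len(sg);

	return sizes;
}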
The test failure looks like a bug in the test, though: since the object
might still be active (gpu_write), I think we also need to force
SHRINK_ACTIVE, otherwise the shrinker will just ignore the object. The
test did work at some point, but I guess it has been modified/refactored
a few times.
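
Roughly the kind of fix implied here, assuming the test forces a shrink
through i915_gem_shrink(); the exact call site and argument list in the
selftest are an assumption:

	/*
	 * Schematic, not an exact patch: also pass I915_SHRINK_ACTIVE so
	 * an object that is still active after gpu_write() is not skipped
	 * by the shrinker.
	 */
	i915_gem_shrink(i915, -1UL, NULL,
			I915_SHRINK_BOUND |
			I915_SHRINK_UNBOUND |
			I915_SHRINK_ACTIVE);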
Ok, makes sense. I'll see if I can fix the test then. And yes, the
difference in behavior is most likely due to the IOMMU driver
coalescing stuff.
/Thomas