On Tue 23-03-21 14:56:54, Christian König wrote:
> On 23.03.21 at 14:41, Michal Hocko wrote:
[...]
> > Anyway, I am wondering whether the overall approach is sound. Why
> > don't you simply use shmem as your backing storage from the beginning
> > and pin those pages if they are used by the device?
>
> Yeah, that is exactly what the Intel guys are doing for their
> integrated GPUs :)
>
> The problem is that for TTM I need to be able to handle dGPUs, and
> those have all kinds of funny allocation restrictions. In other words,
> I need to guarantee that the allocated memory is coherently accessible
> to the GPU without using SWIOTLB.
>
> The simple case is that the device can only do DMA32, but you also get
> devices which can only do 40 bits or 48 bits.
>
> On top of that you also have AGP, CMA and things like CPU cache
> behavior changes (write back vs. write through vs. uncached).

OK, so the underlying problem seems to be that the gfp mask (and thus
mapping_gfp_mask) cannot really reflect your requirements, right? Would
it help if shmem allowed providing an allocation callback to override
alloc_page_vma, which is used currently? I am pretty sure there will be
more to handle, but going through shmem for the whole lifetime is just
so much easier to reason about than tricks that abuse shmem only for
the swapout path.
--
Michal Hocko
SUSE Labs
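
To make the suggestion above a bit more concrete, here is a minimal
sketch of how such a hook might look from the driver side. The
shmem_alloc_ops structure and shmem_set_alloc_ops() registration are
hypothetical names standing in for the callback discussed here, not
existing kernel interfaces; alloc_page(), mapping_gfp_mask(),
dma_get_mask() and GFP_DMA32 are real APIs, and the DMA-mask handling
is only an assumption about how TTM might use the hook:

/*
 * Hypothetical sketch: struct shmem_alloc_ops and shmem_set_alloc_ops()
 * do not exist in the kernel today, they only stand in for the
 * allocation callback proposed above.
 */
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

struct shmem_alloc_ops {
	/* called instead of alloc_page_vma() when shmem needs a new page */
	struct page *(*alloc_page)(struct address_space *mapping,
				   pgoff_t index, void *priv);
	void *priv;
};

/*
 * Example driver-side callback: allocate pages the device can actually
 * address, so SWIOTLB never has to bounce them. A 32-bit limit can be
 * expressed with a gfp flag; a 40/48-bit limit cannot, which is exactly
 * why the decision would live in the driver callback rather than in a
 * gfp mask.
 */
static struct page *ttm_shmem_alloc_page(struct address_space *mapping,
					 pgoff_t index, void *priv)
{
	struct device *dev = priv;
	gfp_t gfp = mapping_gfp_mask(mapping);

	if (dma_get_mask(dev) <= DMA_BIT_MASK(32))
		gfp |= GFP_DMA32;
	/*
	 * For 40/48-bit devices the callback could instead check
	 * page_to_phys() against dma_get_mask() and retry with
	 * GFP_DMA32 if the page turned out to be unreachable.
	 */
	return alloc_page(gfp);
}

Registration would then be something along the lines of (again, a
hypothetical interface):

	static struct shmem_alloc_ops ttm_alloc_ops = {
		.alloc_page = ttm_shmem_alloc_page,
		/* .priv would carry the struct device */
	};
	/* after shmem_file_setup(): */
	shmem_set_alloc_ops(file_inode(shmem_file)->i_mapping, &ttm_alloc_ops);

and shmem would call ops->alloc_page() in place of alloc_page_vma() on
its allocation path, while still owning the page for its whole lifetime
so it can be swapped out normally.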