The shmem subsystem supports relocating pages on swap-in in case they were
loaded into the wrong zone. This was implemented 2 years ago in:

    commit bde05d1ccd512696b09db9dd2e5f33ad19152605
    Author: Hugh Dickins <hughd@xxxxxxxxxx>
    Date:   Tue May 29 15:06:38 2012 -0700

        shmem: replace page if mapping excludes its zone

If a driver requires pages to be in a specific zone, it _must_ set the
correct GFP flags (like __GFP_DMA32) in mapping_gfp_mask(mapping). shmem
will then make sure that any page leaving the swap-cache is located in a
compatible zone.

Signed-off-by: David Herrmann <dh.herrmann@xxxxxxxxx>
---
 drivers/gpu/drm/drm_gem.c | 19 -------------------
 1 file changed, 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 9909bef..a6146ab 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -473,25 +473,6 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
 		if (IS_ERR(p))
 			goto fail;
 		pages[i] = p;
-
-		/* There is a hypothetical issue w/ drivers that require
-		 * buffer memory in the low 4GB.. if the pages are un-
-		 * pinned, and swapped out, they can end up swapped back
-		 * in above 4GB.  If pages are already in memory, then
-		 * shmem_read_mapping_page_gfp will ignore the gfpmask,
-		 * even if the already in-memory page disobeys the mask.
-		 *
-		 * It is only a theoretical issue today, because none of
-		 * the devices with this limitation can be populated with
-		 * enough memory to trigger the issue.  But this BUG_ON()
-		 * is here as a reminder in case the problem with
-		 * shmem_read_mapping_page_gfp() isn't solved by the time
-		 * it does become a real issue.
-		 *
-		 * See this thread: http://lkml.org/lkml/2011/7/11/238
-		 */
-		BUG_ON((gfpmask & __GFP_DMA32) &&
-				(page_to_pfn(p) >= 0x00100000UL));
 	}

 	return pages;
--
1.9.3
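
For a driver that actually needs its buffers below 4GB, the zone restriction
would be expressed once on the object's shmem mapping so that shmem can honour
it on allocation and swap-in. A minimal sketch follows; the
foo_gem_restrict_to_dma32() helper and the GFP_USER | __GFP_DMA32 mask are
illustrative assumptions, not part of this patch:

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <drm/drm_gem.h>

/* Hypothetical helper, called once right after drm_gem_object_init()
 * has created the shmem backing file for the object. */
static void foo_gem_restrict_to_dma32(struct drm_gem_object *obj)
{
	struct address_space *mapping = obj->filp->f_mapping;

	/* Record the zone restriction in mapping_gfp_mask(mapping).
	 * shmem uses this mask for its page allocations and, since
	 * bde05d1ccd51, replaces any page leaving the swap-cache that
	 * sits in an incompatible zone, which is why the BUG_ON()
	 * removed above is no longer needed. */
	mapping_set_gfp_mask(mapping, GFP_USER | __GFP_DMA32);
}

Note the mask has to be in place before the object is first populated: pages
already resident in the page-cache are not relocated, only pages coming back
from swap are checked against the mapping's mask.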