On Wed, 29 May 2019 13:05:52 +0800 Michal Hocko wrote:
> On Wed 29-05-19 10:40:33, Hillf Danton wrote:
> > On Wed, 29 May 2019 00:11:15 +0800 Michal Hocko wrote:
> > > On Tue 28-05-19 23:38:11, Hillf Danton wrote:
> > > >
> > > > In short, I prefer to skip IO mappings, since any kind of address range
> > > > can be expected from userspace and it may well cover an IO mapping.
> > > > Things can get out of control if we reclaim some IO pages while the
> > > > underlying device is trying to fill data into any of them, for instance.
> > >
> > > What do you mean by IO pages and what is the actual problem?
> > >
> > IO pages are the backing-store pages of a mapping whose vm_flags has
> > VM_IO set, and the comment in mm/memory.c says:
> > 	/*
> > 	 * Physically remapped pages are special. Tell the
> > 	 * rest of the world about it:
> > 	 *   VM_IO tells people not to look at these pages
> > 	 *	(accesses can have side effects).
>
> OK, thanks for the clarification of the first part of the question. Now
> to the second and the more important one. What is the actual concern?
> AFAIK those pages shouldn't be on the LRU list.

The backing pages of a GEM object are LRU pages; please see
drm_gem_get_pages() in drivers/gpu/drm/drm_gem.c (a condensed sketch is
appended below).

> If they are then they should
> be safe to get reclaimed, otherwise we would have a problem when
> reclaiming them under normal memory pressure.

Yes, Sir, they could be swapped out.

> Why is this madvise any different?

Now I see it is not, thanks to the light you are casting.

BR
Hillf
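P.S. For reference, drm_gem_get_pages() boils down to roughly the sketch
below (condensed from drivers/gpu/drm/drm_gem.c; the gfp-mask checks and
some of the error handling are trimmed). The backing pages come from the
shmem file behind the GEM object, i.e. ordinary page-cache pages, which
is why they sit on the LRU lists and are reachable by reclaim and by this
madvise:

struct page **drm_gem_get_pages(struct drm_gem_object *obj)
{
	/* the shmem file that backs the GEM object */
	struct address_space *mapping = obj->filp->f_mapping;
	int i, npages = obj->size >> PAGE_SHIFT;
	struct page **pages, *page;

	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return ERR_PTR(-ENOMEM);

	for (i = 0; i < npages; i++) {
		/* a shmem page-cache page: on the LRU and swappable */
		page = shmem_read_mapping_page(mapping, i);
		if (IS_ERR(page))
			goto fail;
		pages[i] = page;
	}
	return pages;

fail:
	while (i--)
		put_page(pages[i]);
	kvfree(pages);
	return ERR_CAST(page);
}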