On Mon, Jul 18, 2016 at 04:31:11PM +0800, Xishi Qiu wrote:
> On 2016/7/18 16:05, Vlastimil Babka wrote:
>
> > On 07/18/2016 10:00 AM, Xishi Qiu wrote:
> >> On 2016/7/18 13:51, Joonsoo Kim wrote:
> >>
> >>> On Fri, Jul 15, 2016 at 10:47:06AM +0800, Xishi Qiu wrote:
> >>>> alloc_migrate_target() is called from migrate_pages(), and the page
> >>>> is always from user space, so we can add __GFP_HIGHMEM directly.
> >>>
> >>> No, not all migratable pages are from user space. For example, the
> >>> blockdev file cache has __GFP_MOVABLE and is migratable, but it has
> >>> neither __GFP_HIGHMEM nor __GFP_USER.
> >>
> >> Hi Joonsoo,
> >>
> >> So the original code "gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;"
> >> is not correct?
> >
> > It's not incorrect. GFP_USER just specifies some reclaim flags, and may
> > perhaps restrict allocation through __GFP_HARDWALL, where the original
> > page could have been allocated without the restriction. But it doesn't
> > put the page in an unexpected address range, as placing a non-highmem
> > page into highmem could. __GFP_MOVABLE then just controls a heuristic
> > for placement within a zone.
> >
> >>> And, zram's memory isn't GFP_HIGHUSER_MOVABLE but has __GFP_MOVABLE.
> >>
> >> Can we distinguish __GFP_MOVABLE from GFP_HIGHUSER_MOVABLE when doing
> >> mem-hotplug?
> >
> > I don't understand the question here; can you rephrase with more
> > detail? Thanks.
>
> Hi Joonsoo,
>
> The above is answered by Vlastimil. :)
> When we do memory offline, and the zone is the movable zone,
> can we use "alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);" to alloc a
> new page? The nid is the next node.

I don't know much about memory offline, but, AFAIK, memory offline could
happen on a non-movable zone such as ZONE_NORMAL. Perhaps you can add
"if the zone of the page is the movable zone, then allocate with
GFP_HIGHUSER_MOVABLE".

Thanks.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx. For more info on Linux MM, see:
http://www.linux-mm.org/ .