On 04.12.24 10:15, Oscar Salvador wrote:
> On Wed, Dec 04, 2024 at 10:03:28AM +0100, Vlastimil Babka wrote:
> > On 12/4/24 09:59, Oscar Salvador wrote:
> > > On Tue, Dec 03, 2024 at 08:19:02PM +0100, David Hildenbrand wrote:
> > > > It was always set using "GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL",
> > > > and I removed the same flag combination in #2 from the memory offline code,
> > > > and we have the exact same thing in do_migrate_range() in
> > > > mm/memory_hotplug.c.
> > > >
> > > > We should investigate if __GFP_HARDWALL is the right thing to use here, and
> > > > whether we can get rid of it by switching to GFP_KERNEL in all these places.
> > > Why wouldn't we want __GFP_HARDWALL set?
> > >
> > > Without it, we could potentially migrate the page to a node which is not
> > > part of the cpuset of the task that originally allocated it, thus violating
> > > the policy? Isn't that a problem?
> > The task doing the alloc_contig_range() will likely not be the same task as
> > the one that originally allocated the page, so its policy would be
> > different, no? So even with __GFP_HARDWALL we might already be migrating
> > outside the original task's constraints. Am I missing something?
> Yes, that's right. I thought we derived the policy from the old page
> somehow when migrating it, but reading the code, that does not seem to be
> the case.
>
> Looking at prepare_alloc_pages(), if !ac->nodemask, which would be the
> case here, we would get the policy from the current task (the one calling
> alloc_contig_range()) when cpusets are enabled.
>
> So yes, I am a bit puzzled why __GFP_HARDWALL was chosen in the first
> place.
I suspect because "GFP_USER" felt like the appropriate thing to do.
Before:

commit f90b1d2f1aaaa40c6519a32e69615edc25bb97d5
Author: Paul Jackson <pj@xxxxxxx>
Date:   Tue Sep 6 15:18:10 2005 -0700

    [PATCH] cpusets: new __GFP_HARDWALL flag

    Add another GFP flag: __GFP_HARDWALL.
GFP_USER and GFP_KERNEL were the same thing. But memory
offlining/alloc_contig were added later.
--
Cheers,
David / dhildenb