On Wed, Jul 28, 2021 at 06:12:21PM +0200, Michal Hocko wrote:
> On Wed 28-07-21 22:11:56, Feng Tang wrote:
> > On Wed, Jul 28, 2021 at 02:31:03PM +0200, Michal Hocko wrote:
> > > [Sorry for a late review]
> > 
> > Not at all. Thank you for all your reviews and suggestions from v1
> > to v6!
> > 
> > > On Mon 12-07-21 16:09:29, Feng Tang wrote:
> > > [...]
> > > > @@ -1887,7 +1909,8 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
> > > >  /* Return the node id preferred by the given mempolicy, or the given id */
> > > >  static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
> > > >  {
> > > > -	if (policy->mode == MPOL_PREFERRED) {
> > > > +	if (policy->mode == MPOL_PREFERRED ||
> > > > +	    policy->mode == MPOL_PREFERRED_MANY) {
> > > >  		nd = first_node(policy->nodes);
> > > >  	} else {
> > > >  		/*
> > > 
> > > Do we really want to have the preferred node to be always the first
> > > node in the node mask? Shouldn't that strive for locality as well?
> > > Existing callers already prefer numa_node_id() - aka local node -
> > > and I believe we shouldn't just throw that away here.
> > 
> > I think it's about the difference between the 'local' and
> > 'prefer/prefer-many' policies. There are different kinds of memory HW:
> > HBM (High Bandwidth Memory), normal DRAM, PMEM (Persistent Memory),
> > which have different price, bandwidth, speed etc. A platform may have
> > two, or all three of these types, and there are real use cases which
> > want memory to come from the 'preferred' node/nodes rather than the
> > local node.
> > 
> > And good point about the 'local node': if the 'prefer-many' policy's
> > nodemask has the local node set, we should pick it rather than this
> > 'first_node', and the same semantic also applies to the several other
> > places you pointed out. Or do I misunderstand your point?
> 
> Yeah. Essentially what I am trying to say is that for
> MPOL_PREFERRED_MANY you simply want to return the given node without any
> alteration. That node will be used for the fallback zonelist and the
> nodemask would make sure we won't get out of the policy.

I think I got your point now :)

With the current mainline code, the 'prefer' policy will return the
preferred node. For 'prefer-many', we would like to keep a similar
semantic: the node preference order is 'preferred' > 'local' > all
other nodes.

There is a customer use case whose platform has both DRAM and cheaper,
bigger but slower PMEM. They analyzed the hotness of their huge data
set, and want to put the huge cold data into PMEM, only falling back
to DRAM as the last step.

The HW topology could be simplified like this:

  Socket 0: Node 0 (CPU + 64GB DRAM), Node 2 (512GB PMEM)
  Socket 1: Node 1 (CPU + 64GB DRAM), Node 3 (512GB PMEM)

E.g. they want to allocate memory for cold application data with the
'prefer-many' policy + 0xC nodemask (N2+N3 PMEM nodes), so no matter
whether the application is running on Node 0 or Node 1, the 'local'
node only has DRAM, which is not their preference, and they want a
preferred --> local --> others order.

Thanks,
Feng

> -- 
> Michal Hocko
> SUSE Labs
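
For illustration, a rough sketch (not the patch under review, and not
necessarily what will eventually be merged) of how policy_node() could
encode the ordering discussed above: keep the caller-supplied node
(usually the local one) when it already sits in the preferred mask,
otherwise hand back the first preferred node and let the policy
nodemask confine the first allocation attempt. It assumes the
MPOL_PREFERRED_MANY mode from this series plus the existing nodemask
helpers in mm/mempolicy.c:

static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
{
	switch (policy->mode) {
	case MPOL_PREFERRED:
		nd = first_node(policy->nodes);
		break;
	case MPOL_PREFERRED_MANY:
		/* Keep 'nd' (usually the local node) if it is preferred. */
		if (!node_isset(nd, policy->nodes))
			nd = first_node(policy->nodes);
		break;
	default:
		/* Other policies: keep the caller-supplied node. */
		break;
	}
	return nd;
}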
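
And a minimal user-space sketch of the customer use case above,
assuming the series is applied and MPOL_PREFERRED_MANY is exported to
userspace; the value 5 used below is only an assumption, the real
definition should come from the updated uapi numaif.h:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5		/* assumed value, check numaif.h */
#endif

int main(void)
{
	unsigned long nodemask = 0xC;	/* nodes 2 and 3, the PMEM nodes */

	/* set_mempolicy(mode, nodemask, maxnode) */
	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &nodemask,
		    8 * sizeof(nodemask)) != 0) {
		fprintf(stderr, "set_mempolicy: %s\n", strerror(errno));
		return 1;
	}

	/*
	 * New anonymous allocations now prefer N2/N3 (PMEM) and only
	 * fall back to the DRAM nodes when those cannot satisfy them.
	 */
	return 0;
}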