"Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx> writes: > "Huang, Ying" <ying.huang@xxxxxxxxx> writes: > >> "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx> writes: >> + */ > > .... > >>> +int next_demotion_node(int node) >>> +{ >>> + struct demotion_nodes *nd; >>> + int target; >>> + >>> + if (!node_demotion) >>> + return NUMA_NO_NODE; >>> + >>> + nd = &node_demotion[node]; >>> + >>> + /* >>> + * node_demotion[] is updated without excluding this >>> + * function from running. >>> + * >>> + * Make sure to use RCU over entire code blocks if >>> + * node_demotion[] reads need to be consistent. >>> + */ >>> + rcu_read_lock(); >>> + /* >>> + * If there are multiple target nodes, just select one >>> + * target node randomly. >>> + * >>> + * In addition, we can also use round-robin to select >>> + * target node, but we should introduce another variable >>> + * for node_demotion[] to record last selected target node, >>> + * that may cause cache ping-pong due to the changing of >>> + * last target node. Or introducing per-cpu data to avoid >>> + * caching issue, which seems more complicated. So selecting >>> + * target node randomly seems better until now. >>> + */ >>> + target = node_random(&nd->preferred); >> >> Don't find code to optimize node_random() for weight == 1 case, forget >> to do that? > > I guess you suggested to do that as the patch for node_random or did I > got the review feedback wrong? Yes. > https://lore.kernel.org/linux-mm/87y1wdn30p.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > > The change for node_random will be patch outside this series. I think we can include it in this series. Because the series provide more information about why we need the change. Best Regards, Huang, Ying