On 2019/10/30 18:14, Peter Zijlstra wrote:
> On Wed, Oct 30, 2019 at 05:34:28PM +0800, Yunsheng Lin wrote:
>> When passing the return value of dev_to_node() to cpumask_of_node()
>> without checking if the device's node id is NUMA_NO_NODE, there is
>> global-out-of-bounds detected by KASAN.
>>
>> From the discussion [1], NUMA_NO_NODE really means no node affinity,
>> which also means all cpus should be usable. So the cpumask_of_node()
>> should always return all cpus online when user passes the node id as
>> NUMA_NO_NODE, just like similar semantic that page allocator handles
>> NUMA_NO_NODE.
>>
>> But we cannot really copy the page allocator logic. Simply because the
>> page allocator doesn't enforce the near node affinity. It just picks it
>> up as a preferred node but then it is free to fallback to any other numa
>> node. This is not the case here and node_to_cpumask_map will only restrict
>> to the particular node's cpus which would have really non deterministic
>> behavior depending on where the code is executed. So in fact we really
>> want to return cpu_online_mask for NUMA_NO_NODE.
>>
>> Also there is a debugging version of node_to_cpumask_map() for x86 and
>> arm64, which is only used when CONFIG_DEBUG_PER_CPU_MAPS is defined, this
>> patch changes it to handle NUMA_NO_NODE as normal node_to_cpumask_map().
>>
>> [1] https://lkml.org/lkml/2019/9/11/66
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@xxxxxxxxxx>
>> Suggested-by: Michal Hocko <mhocko@xxxxxxxxxx>
>> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>> Acked-by: Paul Burton <paul.burton@xxxxxxxx> # MIPS bits
>
> Still:
>
> Nacked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>

It seems I still misunderstood your meaning of "We must not silently
accept NO_NODE there" in [1].

I am not sure if there is still disagreement about whether the NO_NODE
state for dev->numa_node should exist at all.

From the previous discussion [2], you seem to propose doing a "wild guess"
or "fixup" for all devices (both virtual and physical) with NO_NODE, which
means NO_NODE is no longer needed and should be removed once that "wild
guess" or "fixup" is done.

So maybe the reason for your nack here is that there should be no other
NO_NODE handling or fixing related to NO_NODE before the "wild guess" or
"fixup" process is finished, so making node_to_cpumask_map() NUMA_NO_NODE
aware is unnecessary?

Or is your reason for the nack still specific to the pcie device without a
numa node, i.e. the "wild guess" needs to be done for this case before
making node_to_cpumask_map() NUMA_NO_NODE aware?

Please help to clarify the reason for the nack. Or is there still some
other reason for the nack that I missed from the previous discussion?
Thanks.

[1] https://lore.kernel.org/lkml/20191011111539.GX2311@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
[2] https://lore.kernel.org/lkml/20191014094912.GY2311@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/

>
> .
>
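
For context, a minimal sketch of the cpumask_of_node() behaviour the
quoted changelog describes, assuming the node_to_cpumask_map[] based
definition used on x86/arm64; this is only an illustration of the idea,
not necessarily the exact patch under discussion:

/*
 * Sketch of the idea from the quoted changelog: treat NUMA_NO_NODE as
 * "no node affinity" and return all online cpus, instead of indexing
 * node_to_cpumask_map[] with -1, which is the global-out-of-bounds
 * access that KASAN reports.
 */
static inline const struct cpumask *cpumask_of_node(int node)
{
	if (node == NUMA_NO_NODE)
		return cpu_online_mask;

	return node_to_cpumask_map[node];
}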