On Mon 23-09-19 17:48:52, Peter Zijlstra wrote:
> On Mon, Sep 23, 2019 at 05:28:56PM +0200, Michal Hocko wrote:
> > On Mon 23-09-19 17:15:19, Peter Zijlstra wrote:
> > > > diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> > > > index 4123100e..9859acb 100644
> > > > --- a/arch/x86/mm/numa.c
> > > > +++ b/arch/x86/mm/numa.c
> > > > @@ -861,6 +861,9 @@ void numa_remove_cpu(int cpu)
> > > >   */
> > > >  const struct cpumask *cpumask_of_node(int node)
> > > >  {
> > > > +	if (node == NUMA_NO_NODE)
> > > > +		return cpu_online_mask;
> > >
> > > This mandates the caller holds cpus_read_lock() or something, I'm pretty
> > > sure that if I put:
> > >
> > >	lockdep_assert_cpus_held();
> >
> > Is this documented somewhere?
>
> No idea... common sense :-)

I thought that common sense and cpu hotplug were forbidden to be used in
the same sentence :p

> > Also how does that differ from a normal case when a proper node is
> > used? The cpumask will always be dynamic in the cpu hotplug presence,
> > right?
>
> As per normal yes, and I'm fairly sure there's a ton of bugs. Any
> 'online' state is subject to change except when you're holding
> sufficient locks to stop it.
>
> Disabling preemption also stabilizes it, because cpu unplug relies on
> stop-machine.

OK, then I guess it is fair to document that callers should be careful
when using this interface if they absolutely need any stability. But I
strongly suspect they simply do not care all that much. They mostly care
to have something that gives them an idea of which CPUs are close to the
device, and they can tolerate some races. In other words, this is more of
an optimization than a correctness issue.

> > > here, it comes apart real quick. Without holding the cpu hotplug lock,
> > > the online mask is gibberish.
> >
> > Can the returned cpu mask go away?
>
> No, the cpu_online_mask itself has static storage, the contents OTOH can
> change at will. Very little practical difference :-)

OK, thanks for the confirmation.
I was worried that I had overlooked something.

Now to NUMA_NO_NODE itself. Your earlier email noted:

: > +
: > 	if ((unsigned)node >= nr_node_ids) {
: > 		printk(KERN_WARNING
: > 			"cpumask_of_node(%d): (unsigned)node >= nr_node_ids(%u)\n",
:
: I still think this makes absolutely no sense what so ever.

Did you mean the NUMA_NO_NODE handling or the specific node >= nr_node_ids
check? As to NUMA_NO_NODE, I believe it does make sense, because this is
the only way to express that a device is not bound to any NUMA node. Even
the ACPI standard considers the node affinity optional; Yunsheng Lin has
referred to the specific part of the standard in one of the earlier
discussions. Trying to guess the node affinity is worse than providing all
CPUs, IMHO.
-- 
Michal Hocko
SUSE Labs