On Fri, Nov 10, 2017 at 10:07:56AM +0800, Wanpeng Li wrote:

> >> Also, you should not put cpumask_t on stack, that's 'broken'.
>
> Thanks for pointing this out. I found a useful comment in
> arch/x86/kernel/irq.c:
>
> /* These two declarations are only used in check_irq_vectors_for_cpu_disable()
>  * below, which is protected by stop_machine(). Putting them on the stack
>  * results in a stack frame overflow. Dynamically allocating could result in a
>  * failure so declare these two cpumasks as global.
>  */
> static struct cpumask affinity_new, online_new;

That code no longer exists. Also, I'm not entirely sure how it would be
helpful.

What you probably want to do is have a per-cpu cpumask, since
flush_tlb_others() is called with preemption disabled.

But you probably don't want an unconditionally allocated one, since most
kernels will not in fact be PV. So you'll want something like:

	static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);

And then you need something like:

	for_each_possible_cpu(cpu) {
		zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
					GFP_KERNEL, cpu_to_node(cpu));
	}

before you set the pv-op or so.
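
For illustration, a rough sketch of how those pieces could fit together.
The function names (kvm_flush_tlb_others, kvm_setup_pv_tlb_flush) and the
KVM_FEATURE_PV_TLB_FLUSH check are assumptions here, not what the patch
necessarily ends up with:

	/*
	 * Sketch only -- names are illustrative. flush_tlb_others() runs
	 * with preemption disabled, so the current CPU's mask can be used
	 * without further protection.
	 */
	static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);

	static void kvm_flush_tlb_others(const struct cpumask *cpumask,
					 const struct flush_tlb_info *info)
	{
		struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_tlb_mask);

		cpumask_copy(flushmask, cpumask);
		/* ... drop vCPUs that aren't running, then flush the rest ... */
		native_flush_tlb_others(flushmask, info);
	}

	static void __init kvm_setup_pv_tlb_flush(void)
	{
		int cpu;

		/* Only pay for the masks on guests that have the feature. */
		if (!kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH))
			return;

		for_each_possible_cpu(cpu) {
			zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
						GFP_KERNEL, cpu_to_node(cpu));
		}

		/*
		 * Allocate before flipping the pv-op so the callback never
		 * sees an unallocated mask.
		 */
		pv_mmu_ops.flush_tlb_others = kvm_flush_tlb_others;
	}
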