On Sun, Mar 13, 2022 at 7:57 AM Max Filippov <jcmvbkbc@xxxxxxxxx> wrote:
>
> On Sat, Mar 12, 2022 at 7:56 AM <guoren@xxxxxxxxxx> wrote:
> >
> > From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
> >
> > These patch_text implementations use the stop_machine_cpuslocked
> > infrastructure with an atomic cpu_count. The original idea is that
> > while the master CPU runs patch_text, the other CPUs should wait
> > for it. But the current implementation uses the first CPU as the
> > master, which cannot guarantee that the remaining CPUs are all
> > waiting. This patch makes the last CPU the master to remove this
> > potential risk.
> >
> > Signed-off-by: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
> > Signed-off-by: Guo Ren <guoren@xxxxxxxxxx>
> > Cc: Will Deacon <will@xxxxxxxxxx>
> > Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> > Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
> > Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
> > Cc: Chris Zankel <chris@xxxxxxxxxx>
> > Cc: Max Filippov <jcmvbkbc@xxxxxxxxx>
> > Cc: Arnd Bergmann <arnd@xxxxxxxx>
> > ---
> >  arch/arm64/kernel/patching.c      | 4 ++--
> >  arch/csky/kernel/probes/kprobes.c | 2 +-
> >  arch/riscv/kernel/patch.c         | 2 +-
> >  arch/xtensa/kernel/jump_label.c   | 2 +-
> >  4 files changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> > index 771f543464e0..6cfea9650e65 100644
> > --- a/arch/arm64/kernel/patching.c
> > +++ b/arch/arm64/kernel/patching.c
> > @@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> >         int i, ret = 0;
> >         struct aarch64_insn_patch *pp = arg;
> >
> > -       /* The first CPU becomes master */
> > -       if (atomic_inc_return(&pp->cpu_count) == 1) {
> > +       /* The last CPU becomes master */
> > +       if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
>
> atomic_inc_return returns the incremented value, so the last CPU gets
> num_online_cpus(), not (num_online_cpus() - 1).

Oops! You are right, thx.

> >                 for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
> >                         ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
> >                                                              pp->new_insns[i]);
> > diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
> > index 42920f25e73c..19821a06a991 100644
> > --- a/arch/csky/kernel/probes/kprobes.c
> > +++ b/arch/csky/kernel/probes/kprobes.c
> > @@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
> >         struct csky_insn_patch *param = priv;
> >         unsigned int addr = (unsigned int)param->addr;
> >
> > -       if (atomic_inc_return(&param->cpu_count) == 1) {
> > +       if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> >                 *(u16 *) addr = cpu_to_le16(param->opcode);
> >                 dcache_wb_range(addr, addr + 2);
> >                 atomic_inc(&param->cpu_count);
> > diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> > index 0b552873a577..cca72a9388e3 100644
> > --- a/arch/riscv/kernel/patch.c
> > +++ b/arch/riscv/kernel/patch.c
> > @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
> >         struct patch_insn *patch = data;
> >         int ret = 0;
> >
> > -       if (atomic_inc_return(&patch->cpu_count) == 1) {
> > +       if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> >                 ret =
> >                     patch_text_nosync(patch->addr, &patch->insn,
> >                                       GET_INSN_LENGTH(patch->insn));
> > diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
> > index 61cf6497a646..7e1d3f952eb3 100644
> > --- a/arch/xtensa/kernel/jump_label.c
> > +++ b/arch/xtensa/kernel/jump_label.c
> > @@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
> >  {
> >         struct patch *patch = data;
> >
> > -       if (atomic_inc_return(&patch->cpu_count) == 1) {
> > +       if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> >                 local_patch_text(patch->addr, patch->data, patch->sz);
> >                 atomic_inc(&patch->cpu_count);
> >         } else {
> > --
> > 2.25.1
> >
>
> --
> Thanks.
> -- Max

--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/
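
To make the corrected invariant concrete, below is a minimal, untested
sketch of the last-CPU-as-master pattern with the comparison fixed to
num_online_cpus(), following the shape of the riscv patch_text_cb()
above. The demo_insn_patch struct and the demo_patch_text_nosync()
helper are made-up names for illustration, not code from the patch:

#include <linux/atomic.h>
#include <linux/cpumask.h>

/* Hypothetical payload; real callers define their own struct. */
struct demo_insn_patch {
	void *addr;
	u32 insn;
	atomic_t cpu_count;	/* starts at 0 */
};

/* Hypothetical helper that writes the new instruction at addr. */
static int demo_patch_text_nosync(void *addr, u32 insn);

/* Runs on every online CPU via stop_machine_cpuslocked(). */
static int demo_patch_text_cb(void *data)
{
	struct demo_insn_patch *patch = data;
	int ret = 0;

	/*
	 * atomic_inc_return() returns the value *after* the increment,
	 * so the last of the num_online_cpus() CPUs to arrive reads
	 * exactly num_online_cpus() and becomes the master.
	 */
	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
		/* Every other CPU is already spinning below. */
		ret = demo_patch_text_nosync(patch->addr, patch->insn);
		/* Bump the count past num_online_cpus() to release them. */
		atomic_inc(&patch->cpu_count);
	} else {
		while (atomic_read(&patch->cpu_count) <= num_online_cpus())
			cpu_relax();
		smp_mb();
	}

	return ret;
}

With the first-CPU-as-master scheme, a late-arriving CPU could still be
on its way into the callback while the master was already rewriting
text; making the last arrival the master guarantees every other CPU is
parked in the spin loop before any instruction is patched.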