Hi Alexandre,

kernel test robot noticed the following build errors:

[auto build test ERROR on soc/for-next]
[also build test ERROR on linus/master v6.10-rc6 next-20240703]
[cannot apply to arnd-asm-generic/master robh/for-next tip/locking/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:            https://github.com/intel-lab-lkp/linux/commits/Alexandre-Ghiti/riscv-Implement-cmpxchg32-64-using-Zacas/20240627-034946
base:           https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git for-next
patch link:     https://lore.kernel.org/r/20240626130347.520750-2-alexghiti%40rivosinc.com
patch subject:  [PATCH v2 01/10] riscv: Implement cmpxchg32/64() using Zacas
config: riscv-randconfig-002-20240704 (https://download.01.org/0day-ci/archive/20240704/202407041157.odTZAYZ6-lkp@xxxxxxxxx/config)
compiler: clang version 16.0.6 (https://github.com/llvm/llvm-project 7cbf1a2591520c2491aa35339f227775f4d3adf6)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240704/202407041157.odTZAYZ6-lkp@xxxxxxxxx/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@xxxxxxxxx>
| Closes: https://lore.kernel.org/oe-kbuild-all/202407041157.odTZAYZ6-lkp@xxxxxxxxx/

All errors (new ones prefixed by >>):

>> kernel/sched/core.c:11873:7: error: cannot jump from this asm goto statement to one of its possible targets
           if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
               ^
   include/linux/atomic/atomic-instrumented.h:4880:2: note: expanded from macro 'try_cmpxchg'
           raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
           ^
   include/linux/atomic/atomic-arch-fallback.h:192:9: note: expanded from macro 'raw_try_cmpxchg'
           ___r = raw_cmpxchg((_ptr), ___o, (_new)); \
                  ^
   include/linux/atomic/atomic-arch-fallback.h:55:21: note: expanded from macro 'raw_cmpxchg'
   #define raw_cmpxchg arch_cmpxchg
                       ^
   arch/riscv/include/asm/cmpxchg.h:212:2: note: expanded from macro 'arch_cmpxchg'
           _arch_cmpxchg((ptr), (o), (n), ".rl", "", " fence rw, rw\n")
           ^
   arch/riscv/include/asm/cmpxchg.h:189:3: note: expanded from macro '_arch_cmpxchg'
                   __arch_cmpxchg(".w", ".w" sc_sfx, prepend, append,      \
                   ^
   arch/riscv/include/asm/cmpxchg.h:144:3: note: expanded from macro '__arch_cmpxchg'
                   asm goto(ALTERNATIVE("nop", "j %[zacas]", 0,            \
                   ^
   kernel/sched/core.c:11840:7: note: possible target of asm goto statement
           if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
               ^
   include/linux/atomic/atomic-instrumented.h:4880:2: note: expanded from macro 'try_cmpxchg'
           raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
           ^
   include/linux/atomic/atomic-arch-fallback.h:192:9: note: expanded from macro 'raw_try_cmpxchg'
           ___r = raw_cmpxchg((_ptr), ___o, (_new)); \
                  ^
   include/linux/atomic/atomic-arch-fallback.h:55:21: note: expanded from macro 'raw_cmpxchg'
   #define raw_cmpxchg arch_cmpxchg
                       ^
   arch/riscv/include/asm/cmpxchg.h:212:2: note: expanded from macro 'arch_cmpxchg'
           _arch_cmpxchg((ptr), (o), (n), ".rl", "", " fence rw, rw\n")
           ^
   arch/riscv/include/asm/cmpxchg.h:189:3: note: expanded from macro '_arch_cmpxchg'
                   __arch_cmpxchg(".w", ".w" sc_sfx, prepend, append,      \
                   ^
   arch/riscv/include/asm/cmpxchg.h:161:10: note: expanded from macro '__arch_cmpxchg'
                                                                           \
                                                                           ^
   kernel/sched/core.c:11872:2: note: jump exits scope of variable with __attribute__((cleanup))
           scoped_guard (irqsave) {
           ^
   include/linux/cleanup.h:169:20: note: expanded from macro 'scoped_guard'
           for (CLASS(_name, scope)(args),                                 \
                             ^
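For reference, the note above points at a general clang rule rather than anything
specific to the RISC-V macros: clang refuses an asm goto whose listed target would
exit the scope of a variable declared with __attribute__((cleanup)). A hypothetical
stand-alone sketch (made-up names, not kernel code) that clang is expected to reject
with the same "jump exits scope of variable with __attribute__((cleanup))" note:

static void release(int *p)
{
	(void)p;	/* cleanup handler, runs when 'guard' leaves its scope */
}

static int demo_exit_scope(void)
{
	{
		int guard __attribute__((cleanup(release))) = 0;

		/*
		 * 'out' lies outside the block that owns 'guard', so the
		 * compiler cannot guarantee the cleanup runs if this edge
		 * is taken, and it rejects the asm goto.
		 */
		asm goto("" : : "r" (guard) : : out);
	}
	return 0;
out:
	return 1;
}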
   kernel/sched/core.c:11840:7: error: cannot jump from this asm goto statement to one of its possible targets
           if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
               ^
   include/linux/atomic/atomic-instrumented.h:4880:2: note: expanded from macro 'try_cmpxchg'
           raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
           ^
   include/linux/atomic/atomic-arch-fallback.h:192:9: note: expanded from macro 'raw_try_cmpxchg'
           ___r = raw_cmpxchg((_ptr), ___o, (_new)); \
                  ^
   include/linux/atomic/atomic-arch-fallback.h:55:21: note: expanded from macro 'raw_cmpxchg'
   #define raw_cmpxchg arch_cmpxchg
                       ^
   arch/riscv/include/asm/cmpxchg.h:212:2: note: expanded from macro 'arch_cmpxchg'
           _arch_cmpxchg((ptr), (o), (n), ".rl", "", " fence rw, rw\n")
           ^
   arch/riscv/include/asm/cmpxchg.h:189:3: note: expanded from macro '_arch_cmpxchg'
                   __arch_cmpxchg(".w", ".w" sc_sfx, prepend, append,      \
                   ^
   arch/riscv/include/asm/cmpxchg.h:144:3: note: expanded from macro '__arch_cmpxchg'
                   asm goto(ALTERNATIVE("nop", "j %[zacas]", 0,            \
                   ^
   kernel/sched/core.c:11873:7: note: possible target of asm goto statement
           if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
               ^
   include/linux/atomic/atomic-instrumented.h:4880:2: note: expanded from macro 'try_cmpxchg'
           raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
           ^
   include/linux/atomic/atomic-arch-fallback.h:192:9: note: expanded from macro 'raw_try_cmpxchg'
           ___r = raw_cmpxchg((_ptr), ___o, (_new)); \
                  ^
   include/linux/atomic/atomic-arch-fallback.h:55:21: note: expanded from macro 'raw_cmpxchg'
   #define raw_cmpxchg arch_cmpxchg
                       ^
   arch/riscv/include/asm/cmpxchg.h:212:2: note: expanded from macro 'arch_cmpxchg'
           _arch_cmpxchg((ptr), (o), (n), ".rl", "", " fence rw, rw\n")
           ^
   arch/riscv/include/asm/cmpxchg.h:189:3: note: expanded from macro '_arch_cmpxchg'
                   __arch_cmpxchg(".w", ".w" sc_sfx, prepend, append,      \
                   ^
   arch/riscv/include/asm/cmpxchg.h:161:10: note: expanded from macro '__arch_cmpxchg'
                                                                           \
                                                                           ^
   kernel/sched/core.c:11872:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
           scoped_guard (irqsave) {
           ^
   include/linux/cleanup.h:169:20: note: expanded from macro 'scoped_guard'
           for (CLASS(_name, scope)(args),                                 \
                             ^
   2 errors generated.
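The second error is the same check taken in the opposite direction: the asm goto
generated for the try_cmpxchg() at line 11840 lists a possible target at line 11873,
inside the scoped_guard(irqsave) block, so taking that edge would skip the guard
variable's initialization. A hypothetical stand-alone sketch of that shape (again
made-up names, expected to be rejected with the "jump bypasses initialization" note):

static void drop(int *p)
{
	(void)p;	/* cleanup handler for 'guard' */
}

static int demo_bypass_init(void)
{
	/*
	 * 'inside' sits after the declaration of 'guard' in the block
	 * below, so jumping there would bypass both the initialization
	 * and the cleanup registration of 'guard'.
	 */
	asm goto("" : : : : inside);

	{
		int guard __attribute__((cleanup(drop))) = 0;

		(void)guard;
inside:
		;
	}
	return 0;
}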
vim +11873 kernel/sched/core.c

223baf9d17f25e Mathieu Desnoyers 2023-04-20  11821  
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11822  static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11823  				      int cpu)
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11824  {
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11825  	struct rq *rq = cpu_rq(cpu);
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11826  	struct task_struct *t;
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11827  	int cid, lazy_cid;
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11828  
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11829  	cid = READ_ONCE(pcpu_cid->cid);
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11830  	if (!mm_cid_is_valid(cid))
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11831  		return;
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11832  
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11833  	/*
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11834  	 * Clear the cpu cid if it is set to keep cid allocation compact. If
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11835  	 * there happens to be other tasks left on the source cpu using this
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11836  	 * mm, the next task using this mm will reallocate its cid on context
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11837  	 * switch.
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11838  	 */
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11839  	lazy_cid = mm_cid_set_lazy_put(cid);
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11840  	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11841  		return;
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11842  
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11843  	/*
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11844  	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11845  	 * rq->curr->mm matches the scheduler barrier in context_switch()
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11846  	 * between store to rq->curr and load of prev and next task's
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11847  	 * per-mm/cpu cid.
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11848  	 *
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11849  	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11850  	 * rq->curr->mm_cid_active matches the barrier in
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11851  	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11852  	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11853  	 * load of per-mm/cpu cid.
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11854  	 */
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11855  
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11856  	/*
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11857  	 * If we observe an active task using the mm on this rq after setting
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11858  	 * the lazy-put flag, that task will be responsible for transitioning
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11859  	 * from lazy-put flag set to MM_CID_UNSET.
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11860  	 */
0e34600ac9317d Peter Zijlstra    2023-06-09  11861  	scoped_guard (rcu) {
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11862  		t = rcu_dereference(rq->curr);
0e34600ac9317d Peter Zijlstra    2023-06-09  11863  		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11864  			return;
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11865  	}
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11866  
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11867  	/*
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11868  	 * The cid is unused, so it can be unset.
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11869  	 * Disable interrupts to keep the window of cid ownership without rq
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11870  	 * lock small.
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11871  	 */
0e34600ac9317d Peter Zijlstra    2023-06-09  11872  	scoped_guard (irqsave) {
223baf9d17f25e Mathieu Desnoyers 2023-04-20 @11873  		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
223baf9d17f25e Mathieu Desnoyers 2023-04-20  11874  			__mm_cid_put(mm, cid);
0e34600ac9317d Peter Zijlstra    2023-06-09  11875  	}
af7f588d8f7355 Mathieu Desnoyers 2022-11-22  11876  }
af7f588d8f7355 Mathieu Desnoyers 2022-11-22  11877  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki