Hi,

On 8/22/2022 4:13 PM, Sebastian Andrzej Siewior wrote:
> On 2022-08-21 11:32:21 [+0800], Hou Tao wrote:
>> process A                      process B
>>
>> htab_map_update_elem()
>>   htab_lock_bucket()
>>     migrate_disable()
>>     /* return 1 */
>>     __this_cpu_inc_return()
>>     /* preempted by B */
>>
>>                                htab_map_update_elem()
>>                                  /* the same bucket as A */
>>                                  htab_lock_bucket()
>>                                    migrate_disable()
>>                                    /* return 2, so lock fails */
>>                                    __this_cpu_inc_return()
>>                                    return -EBUSY
>>
>> A fix that seems feasible is using in_nmi() in htab_lock_bucket() and
>> only checking the value of map_locked for the NMI context. But that would
>> re-introduce the deadlock on the bucket lock if htab_lock_bucket() is
>> re-entered through a non-tracing program (e.g. an fentry program).
>>
>> So fix it by using preempt_disable() instead of migrate_disable() when
>> increasing htab->map_locked. However, when htab_use_raw_lock() is false,
>> the bucket lock is a sleepable spin-lock and that breaks preempt_disable(),
>> so still use migrate_disable() for the spin-lock case.
> But isn't the RT case still affected by the very same problem?
As said in patch 0, the CONFIG_PREEMPT_RT && non-preallocated case is
fixed in patch 2.

>> Signed-off-by: Hou Tao <houtao1@xxxxxxxxxx>
> Sebastian
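
To make the proposed change concrete, below is a minimal sketch of what
htab_lock_bucket() could look like with the described fix applied. It is
reconstructed from the commit message quoted above rather than taken from
the patch itself, and it assumes the struct bpf_htab / struct bucket layout
and the HASHTAB_MAP_LOCK_MASK and htab_use_raw_lock() helpers from
kernel/bpf/hashtab.c of that era:

static inline int htab_lock_bucket(const struct bpf_htab *htab,
                                   struct bucket *b, u32 hash,
                                   unsigned long *pflags)
{
        unsigned long flags;
        bool use_raw_lock = htab_use_raw_lock(htab);

        hash = hash & HASHTAB_MAP_LOCK_MASK;

        /* Sketch of the described fix: for the raw (non-sleeping) bucket
         * lock, disable preemption so no other task on this CPU can run
         * between the map_locked increment and taking the lock.
         * migrate_disable() alone leaves that window open, so a preempting
         * task on the same CPU sees map_locked != 1 and gets a spurious
         * -EBUSY. When the bucket lock is a sleepable RT spinlock,
         * preemption must stay enabled, so keep migrate_disable() there.
         */
        if (use_raw_lock)
                preempt_disable();
        else
                migrate_disable();

        if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
                __this_cpu_dec(*(htab->map_locked[hash]));
                if (use_raw_lock)
                        preempt_enable();
                else
                        migrate_enable();
                return -EBUSY;
        }

        if (use_raw_lock)
                raw_spin_lock_irqsave(&b->raw_lock, flags);
        else
                spin_lock_irqsave(&b->lock, flags);
        *pflags = flags;

        return 0;
}

With preemption disabled on the raw-lock path, process B in the diagram
above can no longer run on the same CPU before A takes the bucket lock, so
the false -EBUSY cannot occur, while genuine reentrancy (e.g. from NMI or
tracing-program context) is still caught by the map_locked counter.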