On Thu, Nov 3, 2022 at 9:36 PM Frederic Weisbecker <frederic@xxxxxxxxxx> wrote:
>
> On Thu, Nov 03, 2022 at 09:13:13PM +0800, Pingfan Liu wrote:
> > Clarify at first:
> > This issue is totally detected by code suspicion, not a real experience.
> >
> > Scene:
> >
> > __srcu_read_(un)lock() uses percpu variable srcu_(un)lock_count[2].
> > Normally, the percpu can help avoid the non-atomic RMW issue, but in
> > some rare cases, it can not.
> >
> > Supposing that __srcu_read_lock() runs on cpuX, the statement
> >     this_cpu_inc(ssp->sda->srcu_lock_count[idx]);
> > can be decomposed into two sub group:
> >   -1. get the address of this_cpu_ptr(ssp->sda)->srcu_lock_count[idx],
> >       denoted as addressX and let unsigned long *pX = addressX;
> >   -2. *pX = *pX + 1;
>
> It's not supposed to happen:
>
> * The weak version of this_cpu_inc() disables interrupts during the whole.
> * x86 adds directly to gs/fs memory
> * arm64, loongarch, s390 disable preemption
>
> This has to be a fundamental constraint of this_cpu_*() ops implementation.
>

Thanks! I found the exact implementation in the code, and there are also
notes on the this_cpu_*() API which say they are safe from preemption and
interrupts.

Regards,

Pingfan